Jan 22 04:00:21 np0005591760 kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 22 04:00:21 np0005591760 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 22 04:00:21 np0005591760 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 04:00:21 np0005591760 kernel: BIOS-provided physical RAM map:
Jan 22 04:00:21 np0005591760 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 22 04:00:21 np0005591760 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 22 04:00:21 np0005591760 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 22 04:00:21 np0005591760 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 22 04:00:21 np0005591760 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 22 04:00:21 np0005591760 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 22 04:00:21 np0005591760 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 22 04:00:21 np0005591760 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 22 04:00:21 np0005591760 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 22 04:00:21 np0005591760 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Jan 22 04:00:21 np0005591760 kernel: NX (Execute Disable) protection: active
Jan 22 04:00:21 np0005591760 kernel: APIC: Static calls initialized
Jan 22 04:00:21 np0005591760 kernel: SMBIOS 2.8 present.
Jan 22 04:00:21 np0005591760 kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Jan 22 04:00:21 np0005591760 kernel: Hypervisor detected: KVM
Jan 22 04:00:21 np0005591760 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 22 04:00:21 np0005591760 kernel: kvm-clock: using sched offset of 2861298397 cycles
Jan 22 04:00:21 np0005591760 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 22 04:00:21 np0005591760 kernel: tsc: Detected 2445.404 MHz processor
Jan 22 04:00:21 np0005591760 kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Jan 22 04:00:21 np0005591760 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 22 04:00:21 np0005591760 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 22 04:00:21 np0005591760 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 22 04:00:21 np0005591760 kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Jan 22 04:00:21 np0005591760 kernel: Using GB pages for direct mapping
Jan 22 04:00:21 np0005591760 kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 22 04:00:21 np0005591760 kernel: ACPI: Early table checksum verification disabled
Jan 22 04:00:21 np0005591760 kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Jan 22 04:00:21 np0005591760 kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 04:00:21 np0005591760 kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 04:00:21 np0005591760 kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 04:00:21 np0005591760 kernel: ACPI: FACS 0x000000007FFDFC80 000040
Jan 22 04:00:21 np0005591760 kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 04:00:21 np0005591760 kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 04:00:21 np0005591760 kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 04:00:21 np0005591760 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Jan 22 04:00:21 np0005591760 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Jan 22 04:00:21 np0005591760 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Jan 22 04:00:21 np0005591760 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Jan 22 04:00:21 np0005591760 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Jan 22 04:00:21 np0005591760 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Jan 22 04:00:21 np0005591760 kernel: No NUMA configuration found
Jan 22 04:00:21 np0005591760 kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Jan 22 04:00:21 np0005591760 kernel: NODE_DATA(0) allocated [mem 0x27ffd5000-0x27fffffff]
Jan 22 04:00:21 np0005591760 kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Jan 22 04:00:21 np0005591760 kernel: Zone ranges:
Jan 22 04:00:21 np0005591760 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 22 04:00:21 np0005591760 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 22 04:00:21 np0005591760 kernel:  Normal   [mem 0x0000000100000000-0x000000027fffffff]
Jan 22 04:00:21 np0005591760 kernel:  Device   empty
Jan 22 04:00:21 np0005591760 kernel: Movable zone start for each node
Jan 22 04:00:21 np0005591760 kernel: Early memory node ranges
Jan 22 04:00:21 np0005591760 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 22 04:00:21 np0005591760 kernel:  node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 22 04:00:21 np0005591760 kernel:  node   0: [mem 0x0000000100000000-0x000000027fffffff]
Jan 22 04:00:21 np0005591760 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Jan 22 04:00:21 np0005591760 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 22 04:00:21 np0005591760 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 22 04:00:21 np0005591760 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 22 04:00:21 np0005591760 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 22 04:00:21 np0005591760 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 22 04:00:21 np0005591760 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 22 04:00:21 np0005591760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 22 04:00:21 np0005591760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 22 04:00:21 np0005591760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 22 04:00:21 np0005591760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 22 04:00:21 np0005591760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 22 04:00:21 np0005591760 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 22 04:00:21 np0005591760 kernel: TSC deadline timer available
Jan 22 04:00:21 np0005591760 kernel: CPU topo: Max. logical packages:   4
Jan 22 04:00:21 np0005591760 kernel: CPU topo: Max. logical dies:       4
Jan 22 04:00:21 np0005591760 kernel: CPU topo: Max. dies per package:   1
Jan 22 04:00:21 np0005591760 kernel: CPU topo: Max. threads per core:   1
Jan 22 04:00:21 np0005591760 kernel: CPU topo: Num. cores per package:     1
Jan 22 04:00:21 np0005591760 kernel: CPU topo: Num. threads per package:   1
Jan 22 04:00:21 np0005591760 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 22 04:00:21 np0005591760 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 22 04:00:21 np0005591760 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 22 04:00:21 np0005591760 kernel: kvm-guest: setup PV sched yield
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 22 04:00:21 np0005591760 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 22 04:00:21 np0005591760 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 22 04:00:21 np0005591760 kernel: Booting paravirtualized kernel on KVM
Jan 22 04:00:21 np0005591760 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 22 04:00:21 np0005591760 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 22 04:00:21 np0005591760 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Jan 22 04:00:21 np0005591760 kernel: kvm-guest: PV spinlocks enabled
Jan 22 04:00:21 np0005591760 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 04:00:21 np0005591760 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 22 04:00:21 np0005591760 kernel: random: crng init done
Jan 22 04:00:21 np0005591760 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: Fallback order for Node 0: 0 
Jan 22 04:00:21 np0005591760 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 22 04:00:21 np0005591760 kernel: Policy zone: Normal
Jan 22 04:00:21 np0005591760 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 22 04:00:21 np0005591760 kernel: software IO TLB: area num 4.
Jan 22 04:00:21 np0005591760 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 22 04:00:21 np0005591760 kernel: ftrace: allocating 49417 entries in 194 pages
Jan 22 04:00:21 np0005591760 kernel: ftrace: allocated 194 pages with 3 groups
Jan 22 04:00:21 np0005591760 kernel: Dynamic Preempt: voluntary
Jan 22 04:00:21 np0005591760 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 22 04:00:21 np0005591760 kernel: rcu: 	RCU event tracing is enabled.
Jan 22 04:00:21 np0005591760 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Jan 22 04:00:21 np0005591760 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 22 04:00:21 np0005591760 kernel: 	Rude variant of Tasks RCU enabled.
Jan 22 04:00:21 np0005591760 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 22 04:00:21 np0005591760 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 22 04:00:21 np0005591760 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 22 04:00:21 np0005591760 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 22 04:00:21 np0005591760 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 22 04:00:21 np0005591760 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 22 04:00:21 np0005591760 kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Jan 22 04:00:21 np0005591760 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 22 04:00:21 np0005591760 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 22 04:00:21 np0005591760 kernel: Console: colour VGA+ 80x25
Jan 22 04:00:21 np0005591760 kernel: printk: console [ttyS0] enabled
Jan 22 04:00:21 np0005591760 kernel: ACPI: Core revision 20230331
Jan 22 04:00:21 np0005591760 kernel: APIC: Switch to symmetric I/O mode setup
Jan 22 04:00:21 np0005591760 kernel: x2apic enabled
Jan 22 04:00:21 np0005591760 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 22 04:00:21 np0005591760 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 22 04:00:21 np0005591760 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 22 04:00:21 np0005591760 kernel: kvm-guest: setup PV IPIs
Jan 22 04:00:21 np0005591760 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 22 04:00:21 np0005591760 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Jan 22 04:00:21 np0005591760 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 22 04:00:21 np0005591760 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 22 04:00:21 np0005591760 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 22 04:00:21 np0005591760 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 22 04:00:21 np0005591760 kernel: Spectre V2 : Mitigation: Retpolines
Jan 22 04:00:21 np0005591760 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 22 04:00:21 np0005591760 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 22 04:00:21 np0005591760 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 22 04:00:21 np0005591760 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 22 04:00:21 np0005591760 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 22 04:00:21 np0005591760 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 22 04:00:21 np0005591760 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 22 04:00:21 np0005591760 kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Jan 22 04:00:21 np0005591760 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 22 04:00:21 np0005591760 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 22 04:00:21 np0005591760 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 22 04:00:21 np0005591760 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 22 04:00:21 np0005591760 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 22 04:00:21 np0005591760 kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Jan 22 04:00:21 np0005591760 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jan 22 04:00:21 np0005591760 kernel: Freeing SMP alternatives memory: 40K
Jan 22 04:00:21 np0005591760 kernel: pid_max: default: 32768 minimum: 301
Jan 22 04:00:21 np0005591760 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 22 04:00:21 np0005591760 kernel: landlock: Up and running.
Jan 22 04:00:21 np0005591760 kernel: Yama: becoming mindful.
Jan 22 04:00:21 np0005591760 kernel: SELinux:  Initializing.
Jan 22 04:00:21 np0005591760 kernel: LSM support for eBPF active
Jan 22 04:00:21 np0005591760 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 22 04:00:21 np0005591760 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 22 04:00:21 np0005591760 kernel: ... version:                0
Jan 22 04:00:21 np0005591760 kernel: ... bit width:              48
Jan 22 04:00:21 np0005591760 kernel: ... generic registers:      6
Jan 22 04:00:21 np0005591760 kernel: ... value mask:             0000ffffffffffff
Jan 22 04:00:21 np0005591760 kernel: ... max period:             00007fffffffffff
Jan 22 04:00:21 np0005591760 kernel: ... fixed-purpose events:   0
Jan 22 04:00:21 np0005591760 kernel: ... event mask:             000000000000003f
Jan 22 04:00:21 np0005591760 kernel: signal: max sigframe size: 3376
Jan 22 04:00:21 np0005591760 kernel: rcu: Hierarchical SRCU implementation.
Jan 22 04:00:21 np0005591760 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 22 04:00:21 np0005591760 kernel: smp: Bringing up secondary CPUs ...
Jan 22 04:00:21 np0005591760 kernel: smpboot: x86: Booting SMP configuration:
Jan 22 04:00:21 np0005591760 kernel: .... node  #0, CPUs:      #1 #2 #3
Jan 22 04:00:21 np0005591760 kernel: smp: Brought up 1 node, 4 CPUs
Jan 22 04:00:21 np0005591760 kernel: smpboot: Total of 4 processors activated (19563.23 BogoMIPS)
Jan 22 04:00:21 np0005591760 kernel: node 0 deferred pages initialised in 9ms
Jan 22 04:00:21 np0005591760 kernel: Memory: 7766072K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 617308K reserved, 0K cma-reserved)
Jan 22 04:00:21 np0005591760 kernel: devtmpfs: initialized
Jan 22 04:00:21 np0005591760 kernel: x86/mm: Memory block size: 128MB
Jan 22 04:00:21 np0005591760 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 22 04:00:21 np0005591760 kernel: futex hash table entries: 1024 (65536 bytes on 1 NUMA nodes, total 64 KiB, linear).
Jan 22 04:00:21 np0005591760 kernel: pinctrl core: initialized pinctrl subsystem
Jan 22 04:00:21 np0005591760 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 22 04:00:21 np0005591760 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 22 04:00:21 np0005591760 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 22 04:00:21 np0005591760 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 22 04:00:21 np0005591760 kernel: audit: initializing netlink subsys (disabled)
Jan 22 04:00:21 np0005591760 kernel: audit: type=2000 audit(1769072420.997:1): state=initialized audit_enabled=0 res=1
Jan 22 04:00:21 np0005591760 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 22 04:00:21 np0005591760 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 22 04:00:21 np0005591760 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 22 04:00:21 np0005591760 kernel: cpuidle: using governor menu
Jan 22 04:00:21 np0005591760 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 22 04:00:21 np0005591760 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 22 04:00:21 np0005591760 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 22 04:00:21 np0005591760 kernel: PCI: Using configuration type 1 for base access
Jan 22 04:00:21 np0005591760 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 22 04:00:21 np0005591760 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 22 04:00:21 np0005591760 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 22 04:00:21 np0005591760 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 22 04:00:21 np0005591760 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 22 04:00:21 np0005591760 kernel: Demotion targets for Node 0: null
Jan 22 04:00:21 np0005591760 kernel: cryptd: max_cpu_qlen set to 1000
Jan 22 04:00:21 np0005591760 kernel: ACPI: Added _OSI(Module Device)
Jan 22 04:00:21 np0005591760 kernel: ACPI: Added _OSI(Processor Device)
Jan 22 04:00:21 np0005591760 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 22 04:00:21 np0005591760 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 22 04:00:21 np0005591760 kernel: ACPI: Interpreter enabled
Jan 22 04:00:21 np0005591760 kernel: ACPI: PM: (supports S0 S5)
Jan 22 04:00:21 np0005591760 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 22 04:00:21 np0005591760 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 22 04:00:21 np0005591760 kernel: PCI: Using E820 reservations for host bridge windows
Jan 22 04:00:21 np0005591760 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 22 04:00:21 np0005591760 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 22 04:00:21 np0005591760 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Jan 22 04:00:21 np0005591760 kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Jan 22 04:00:21 np0005591760 kernel: PCI host bridge to bus 0000:00
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Jan 22 04:00:21 np0005591760 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:02: extended config space not accessible
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [1] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [2] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [3] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [4] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [5] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [6] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [7] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [8] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [9] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [10] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [11] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [12] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [13] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [14] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [15] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [16] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [17] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [18] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [19] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [20] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [21] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [22] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [23] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [24] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [25] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [26] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [27] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [28] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [29] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [30] registered
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [31] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 22 04:00:21 np0005591760 kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-2] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 22 04:00:21 np0005591760 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-3] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Jan 22 04:00:21 np0005591760 kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-4] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 22 04:00:21 np0005591760 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-5] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 22 04:00:21 np0005591760 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-6] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-7] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-8] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-9] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-10] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-11] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-12] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-13] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-14] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-15] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-16] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Jan 22 04:00:21 np0005591760 kernel: acpiphp: Slot [0-17] registered
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 22 04:00:21 np0005591760 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 22 04:00:21 np0005591760 kernel: iommu: Default domain type: Translated
Jan 22 04:00:21 np0005591760 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 22 04:00:21 np0005591760 kernel: SCSI subsystem initialized
Jan 22 04:00:21 np0005591760 kernel: ACPI: bus type USB registered
Jan 22 04:00:21 np0005591760 kernel: usbcore: registered new interface driver usbfs
Jan 22 04:00:21 np0005591760 kernel: usbcore: registered new interface driver hub
Jan 22 04:00:21 np0005591760 kernel: usbcore: registered new device driver usb
Jan 22 04:00:21 np0005591760 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 22 04:00:21 np0005591760 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 22 04:00:21 np0005591760 kernel: PTP clock support registered
Jan 22 04:00:21 np0005591760 kernel: EDAC MC: Ver: 3.0.0
Jan 22 04:00:21 np0005591760 kernel: NetLabel: Initializing
Jan 22 04:00:21 np0005591760 kernel: NetLabel:  domain hash size = 128
Jan 22 04:00:21 np0005591760 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 22 04:00:21 np0005591760 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 22 04:00:21 np0005591760 kernel: PCI: Using ACPI for IRQ routing
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 22 04:00:21 np0005591760 kernel: vgaarb: loaded
Jan 22 04:00:21 np0005591760 kernel: clocksource: Switched to clocksource kvm-clock
Jan 22 04:00:21 np0005591760 kernel: VFS: Disk quotas dquot_6.6.0
Jan 22 04:00:21 np0005591760 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 22 04:00:21 np0005591760 kernel: pnp: PnP ACPI init
Jan 22 04:00:21 np0005591760 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 22 04:00:21 np0005591760 kernel: pnp: PnP ACPI: found 5 devices
Jan 22 04:00:21 np0005591760 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 22 04:00:21 np0005591760 kernel: NET: Registered PF_INET protocol family
Jan 22 04:00:21 np0005591760 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 22 04:00:21 np0005591760 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 04:00:21 np0005591760 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 22 04:00:21 np0005591760 kernel: NET: Registered PF_XDP protocol family
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Jan 22 04:00:21 np0005591760 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Jan 22 04:00:21 np0005591760 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Jan 22 04:00:21 np0005591760 kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Jan 22 04:00:21 np0005591760 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 22 04:00:21 np0005591760 kernel: PCI: CLS 0 bytes, default 64
Jan 22 04:00:21 np0005591760 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 22 04:00:21 np0005591760 kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Jan 22 04:00:21 np0005591760 kernel: Trying to unpack rootfs image as initramfs...
Jan 22 04:00:21 np0005591760 kernel: ACPI: bus type thunderbolt registered
Jan 22 04:00:21 np0005591760 kernel: Initialise system trusted keyrings
Jan 22 04:00:21 np0005591760 kernel: Key type blacklist registered
Jan 22 04:00:21 np0005591760 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 22 04:00:21 np0005591760 kernel: zbud: loaded
Jan 22 04:00:21 np0005591760 kernel: integrity: Platform Keyring initialized
Jan 22 04:00:21 np0005591760 kernel: integrity: Machine keyring initialized
Jan 22 04:00:21 np0005591760 kernel: Freeing initrd memory: 87956K
Jan 22 04:00:21 np0005591760 kernel: NET: Registered PF_ALG protocol family
Jan 22 04:00:21 np0005591760 kernel: xor: automatically using best checksumming function   avx
Jan 22 04:00:21 np0005591760 kernel: Key type asymmetric registered
Jan 22 04:00:21 np0005591760 kernel: Asymmetric key parser 'x509' registered
Jan 22 04:00:21 np0005591760 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 22 04:00:21 np0005591760 kernel: io scheduler mq-deadline registered
Jan 22 04:00:21 np0005591760 kernel: io scheduler kyber registered
Jan 22 04:00:21 np0005591760 kernel: io scheduler bfq registered
Jan 22 04:00:21 np0005591760 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 22 04:00:21 np0005591760 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Jan 22 04:00:21 np0005591760 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Jan 22 04:00:21 np0005591760 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Jan 22 04:00:21 np0005591760 kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Jan 22 04:00:21 np0005591760 kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Jan 22 04:00:21 np0005591760 kernel: shpchp 0000:01:00.0: Slot initialization failed
Jan 22 04:00:21 np0005591760 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 22 04:00:21 np0005591760 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 22 04:00:21 np0005591760 kernel: ACPI: button: Power Button [PWRF]
Jan 22 04:00:21 np0005591760 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 22 04:00:21 np0005591760 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 22 04:00:21 np0005591760 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 04:00:21 np0005591760 kernel: Non-volatile memory driver v1.3
Jan 22 04:00:21 np0005591760 kernel: rdac: device handler registered
Jan 22 04:00:21 np0005591760 kernel: hp_sw: device handler registered
Jan 22 04:00:21 np0005591760 kernel: emc: device handler registered
Jan 22 04:00:21 np0005591760 kernel: alua: device handler registered
Jan 22 04:00:21 np0005591760 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Jan 22 04:00:21 np0005591760 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Jan 22 04:00:21 np0005591760 kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Jan 22 04:00:21 np0005591760 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Jan 22 04:00:21 np0005591760 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 22 04:00:21 np0005591760 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 04:00:21 np0005591760 kernel: usb usb1: Product: UHCI Host Controller
Jan 22 04:00:21 np0005591760 kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 22 04:00:21 np0005591760 kernel: usb usb1: SerialNumber: 0000:02:01.0
Jan 22 04:00:21 np0005591760 kernel: hub 1-0:1.0: USB hub found
Jan 22 04:00:21 np0005591760 kernel: hub 1-0:1.0: 2 ports detected
Jan 22 04:00:21 np0005591760 kernel: usbcore: registered new interface driver usbserial_generic
Jan 22 04:00:21 np0005591760 kernel: usbserial: USB Serial support registered for generic
Jan 22 04:00:21 np0005591760 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 22 04:00:21 np0005591760 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 22 04:00:21 np0005591760 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 22 04:00:21 np0005591760 kernel: mousedev: PS/2 mouse device common for all mice
Jan 22 04:00:21 np0005591760 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 22 04:00:21 np0005591760 kernel: rtc_cmos 00:03: registered as rtc0
Jan 22 04:00:21 np0005591760 kernel: rtc_cmos 00:03: setting system clock to 2026-01-22T09:00:21 UTC (1769072421)
Jan 22 04:00:21 np0005591760 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 22 04:00:21 np0005591760 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 22 04:00:21 np0005591760 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 22 04:00:21 np0005591760 kernel: usbcore: registered new interface driver usbhid
Jan 22 04:00:21 np0005591760 kernel: usbhid: USB HID core driver
Jan 22 04:00:21 np0005591760 kernel: drop_monitor: Initializing network drop monitor service
Jan 22 04:00:21 np0005591760 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 22 04:00:21 np0005591760 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 22 04:00:21 np0005591760 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 22 04:00:21 np0005591760 kernel: Initializing XFRM netlink socket
Jan 22 04:00:21 np0005591760 kernel: NET: Registered PF_INET6 protocol family
Jan 22 04:00:21 np0005591760 kernel: Segment Routing with IPv6
Jan 22 04:00:21 np0005591760 kernel: NET: Registered PF_PACKET protocol family
Jan 22 04:00:21 np0005591760 kernel: mpls_gso: MPLS GSO support
Jan 22 04:00:21 np0005591760 kernel: IPI shorthand broadcast: enabled
Jan 22 04:00:21 np0005591760 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 22 04:00:21 np0005591760 kernel: AES CTR mode by8 optimization enabled
Jan 22 04:00:21 np0005591760 kernel: sched_clock: Marking stable (947001756, 142988607)->(1297492721, -207502358)
Jan 22 04:00:21 np0005591760 kernel: registered taskstats version 1
Jan 22 04:00:21 np0005591760 kernel: Loading compiled-in X.509 certificates
Jan 22 04:00:21 np0005591760 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 22 04:00:21 np0005591760 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 22 04:00:21 np0005591760 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 22 04:00:21 np0005591760 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 22 04:00:21 np0005591760 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 22 04:00:21 np0005591760 kernel: Demotion targets for Node 0: null
Jan 22 04:00:21 np0005591760 kernel: page_owner is disabled
Jan 22 04:00:21 np0005591760 kernel: Key type .fscrypt registered
Jan 22 04:00:21 np0005591760 kernel: Key type fscrypt-provisioning registered
Jan 22 04:00:21 np0005591760 kernel: Key type big_key registered
Jan 22 04:00:21 np0005591760 kernel: Key type encrypted registered
Jan 22 04:00:21 np0005591760 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 22 04:00:21 np0005591760 kernel: Loading compiled-in module X.509 certificates
Jan 22 04:00:21 np0005591760 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 22 04:00:21 np0005591760 kernel: ima: Allocated hash algorithm: sha256
Jan 22 04:00:21 np0005591760 kernel: ima: No architecture policies found
Jan 22 04:00:21 np0005591760 kernel: evm: Initialising EVM extended attributes:
Jan 22 04:00:21 np0005591760 kernel: evm: security.selinux
Jan 22 04:00:21 np0005591760 kernel: evm: security.SMACK64 (disabled)
Jan 22 04:00:21 np0005591760 kernel: evm: security.SMACK64EXEC (disabled)
Jan 22 04:00:21 np0005591760 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 22 04:00:21 np0005591760 kernel: evm: security.SMACK64MMAP (disabled)
Jan 22 04:00:21 np0005591760 kernel: evm: security.apparmor (disabled)
Jan 22 04:00:21 np0005591760 kernel: evm: security.ima
Jan 22 04:00:21 np0005591760 kernel: evm: security.capability
Jan 22 04:00:21 np0005591760 kernel: evm: HMAC attrs: 0x1
Jan 22 04:00:21 np0005591760 kernel: Running certificate verification RSA selftest
Jan 22 04:00:21 np0005591760 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 22 04:00:21 np0005591760 kernel: Running certificate verification ECDSA selftest
Jan 22 04:00:21 np0005591760 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 22 04:00:21 np0005591760 kernel: clk: Disabling unused clocks
Jan 22 04:00:21 np0005591760 kernel: Freeing unused decrypted memory: 2028K
Jan 22 04:00:21 np0005591760 kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 22 04:00:21 np0005591760 kernel: Write protecting the kernel read-only data: 30720k
Jan 22 04:00:21 np0005591760 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 22 04:00:21 np0005591760 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 22 04:00:21 np0005591760 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 22 04:00:21 np0005591760 kernel: Run /init as init process
Jan 22 04:00:21 np0005591760 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 22 04:00:21 np0005591760 systemd: Detected virtualization kvm.
Jan 22 04:00:21 np0005591760 systemd: Detected architecture x86-64.
Jan 22 04:00:21 np0005591760 systemd: Running in initrd.
Jan 22 04:00:21 np0005591760 systemd: No hostname configured, using default hostname.
Jan 22 04:00:21 np0005591760 systemd: Hostname set to <localhost>.
Jan 22 04:00:21 np0005591760 systemd: Initializing machine ID from VM UUID.
Jan 22 04:00:21 np0005591760 systemd: Queued start job for default target Initrd Default Target.
Jan 22 04:00:21 np0005591760 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 22 04:00:21 np0005591760 systemd: Reached target Local Encrypted Volumes.
Jan 22 04:00:21 np0005591760 systemd: Reached target Initrd /usr File System.
Jan 22 04:00:21 np0005591760 systemd: Reached target Local File Systems.
Jan 22 04:00:21 np0005591760 systemd: Reached target Path Units.
Jan 22 04:00:21 np0005591760 systemd: Reached target Slice Units.
Jan 22 04:00:21 np0005591760 systemd: Reached target Swaps.
Jan 22 04:00:21 np0005591760 systemd: Reached target Timer Units.
Jan 22 04:00:21 np0005591760 systemd: Listening on D-Bus System Message Bus Socket.
Jan 22 04:00:21 np0005591760 systemd: Listening on Journal Socket (/dev/log).
Jan 22 04:00:21 np0005591760 systemd: Listening on Journal Socket.
Jan 22 04:00:21 np0005591760 systemd: Listening on udev Control Socket.
Jan 22 04:00:21 np0005591760 systemd: Listening on udev Kernel Socket.
Jan 22 04:00:21 np0005591760 systemd: Reached target Socket Units.
Jan 22 04:00:21 np0005591760 systemd: Starting Create List of Static Device Nodes...
Jan 22 04:00:21 np0005591760 systemd: Starting Journal Service...
Jan 22 04:00:21 np0005591760 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 22 04:00:21 np0005591760 systemd: Starting Apply Kernel Variables...
Jan 22 04:00:21 np0005591760 systemd: Starting Create System Users...
Jan 22 04:00:21 np0005591760 systemd: Starting Setup Virtual Console...
Jan 22 04:00:21 np0005591760 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 22 04:00:21 np0005591760 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 22 04:00:21 np0005591760 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 22 04:00:21 np0005591760 kernel: usb 1-1: Manufacturer: QEMU
Jan 22 04:00:21 np0005591760 kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Jan 22 04:00:21 np0005591760 systemd: Finished Create List of Static Device Nodes.
Jan 22 04:00:21 np0005591760 systemd: Finished Apply Kernel Variables.
Jan 22 04:00:21 np0005591760 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 22 04:00:21 np0005591760 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Jan 22 04:00:21 np0005591760 systemd-journald[284]: Journal started
Jan 22 04:00:21 np0005591760 systemd-journald[284]: Runtime Journal (/run/log/journal/85714d7ebe4c45769a223776a24eda65) is 8.0M, max 153.6M, 145.6M free.
Jan 22 04:00:21 np0005591760 systemd: Started Journal Service.
Jan 22 04:00:21 np0005591760 systemd-sysusers[287]: Creating group 'users' with GID 100.
Jan 22 04:00:21 np0005591760 systemd-sysusers[287]: Creating group 'dbus' with GID 81.
Jan 22 04:00:21 np0005591760 systemd-sysusers[287]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 22 04:00:21 np0005591760 systemd[1]: Finished Create System Users.
Jan 22 04:00:22 np0005591760 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 22 04:00:22 np0005591760 systemd[1]: Starting Create Volatile Files and Directories...
Jan 22 04:00:22 np0005591760 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 22 04:00:22 np0005591760 systemd[1]: Finished Create Volatile Files and Directories.
Jan 22 04:00:22 np0005591760 systemd[1]: Finished Setup Virtual Console.
Jan 22 04:00:22 np0005591760 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 22 04:00:22 np0005591760 systemd[1]: Starting dracut cmdline hook...
Jan 22 04:00:22 np0005591760 dracut-cmdline[300]: dracut-9 dracut-057-102.git20250818.el9
Jan 22 04:00:22 np0005591760 dracut-cmdline[300]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 04:00:22 np0005591760 systemd[1]: Finished dracut cmdline hook.
Jan 22 04:00:22 np0005591760 systemd[1]: Starting dracut pre-udev hook...
Jan 22 04:00:22 np0005591760 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 22 04:00:22 np0005591760 kernel: device-mapper: uevent: version 1.0.3
Jan 22 04:00:22 np0005591760 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 22 04:00:22 np0005591760 kernel: RPC: Registered named UNIX socket transport module.
Jan 22 04:00:22 np0005591760 kernel: RPC: Registered udp transport module.
Jan 22 04:00:22 np0005591760 kernel: RPC: Registered tcp transport module.
Jan 22 04:00:22 np0005591760 kernel: RPC: Registered tcp-with-tls transport module.
Jan 22 04:00:22 np0005591760 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 22 04:00:22 np0005591760 rpc.statd[415]: Version 2.5.4 starting
Jan 22 04:00:22 np0005591760 rpc.statd[415]: Initializing NSM state
Jan 22 04:00:22 np0005591760 rpc.idmapd[420]: Setting log level to 0
Jan 22 04:00:22 np0005591760 systemd[1]: Finished dracut pre-udev hook.
Jan 22 04:00:22 np0005591760 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 22 04:00:22 np0005591760 systemd-udevd[433]: Using default interface naming scheme 'rhel-9.0'.
Jan 22 04:00:22 np0005591760 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 22 04:00:22 np0005591760 systemd[1]: Starting dracut pre-trigger hook...
Jan 22 04:00:22 np0005591760 systemd[1]: Finished dracut pre-trigger hook.
Jan 22 04:00:22 np0005591760 systemd[1]: Starting Coldplug All udev Devices...
Jan 22 04:00:22 np0005591760 systemd[1]: Created slice Slice /system/modprobe.
Jan 22 04:00:22 np0005591760 systemd[1]: Starting Load Kernel Module configfs...
Jan 22 04:00:22 np0005591760 systemd[1]: Finished Coldplug All udev Devices.
Jan 22 04:00:22 np0005591760 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 04:00:22 np0005591760 systemd[1]: Finished Load Kernel Module configfs.
Jan 22 04:00:22 np0005591760 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 22 04:00:22 np0005591760 systemd[1]: Reached target Network.
Jan 22 04:00:22 np0005591760 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 22 04:00:22 np0005591760 systemd[1]: Starting dracut initqueue hook...
Jan 22 04:00:22 np0005591760 kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Jan 22 04:00:22 np0005591760 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 22 04:00:22 np0005591760 kernel: vda: vda1
Jan 22 04:00:22 np0005591760 systemd-udevd[434]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:00:22 np0005591760 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 22 04:00:22 np0005591760 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 22 04:00:22 np0005591760 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 22 04:00:22 np0005591760 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Jan 22 04:00:22 np0005591760 kernel: scsi host0: ahci
Jan 22 04:00:22 np0005591760 kernel: scsi host1: ahci
Jan 22 04:00:22 np0005591760 kernel: scsi host2: ahci
Jan 22 04:00:22 np0005591760 kernel: scsi host3: ahci
Jan 22 04:00:22 np0005591760 kernel: scsi host4: ahci
Jan 22 04:00:22 np0005591760 kernel: scsi host5: ahci
Jan 22 04:00:22 np0005591760 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 49 lpm-pol 0
Jan 22 04:00:22 np0005591760 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 49 lpm-pol 0
Jan 22 04:00:22 np0005591760 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 49 lpm-pol 0
Jan 22 04:00:22 np0005591760 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 49 lpm-pol 0
Jan 22 04:00:22 np0005591760 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 49 lpm-pol 0
Jan 22 04:00:22 np0005591760 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 49 lpm-pol 0
Jan 22 04:00:22 np0005591760 systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 22 04:00:22 np0005591760 systemd[1]: Reached target Initrd Root Device.
Jan 22 04:00:22 np0005591760 systemd[1]: Mounting Kernel Configuration File System...
Jan 22 04:00:22 np0005591760 systemd[1]: Mounted Kernel Configuration File System.
Jan 22 04:00:22 np0005591760 systemd[1]: Reached target System Initialization.
Jan 22 04:00:22 np0005591760 systemd[1]: Reached target Basic System.
Jan 22 04:00:22 np0005591760 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 22 04:00:22 np0005591760 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 22 04:00:22 np0005591760 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 22 04:00:22 np0005591760 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 22 04:00:22 np0005591760 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 22 04:00:22 np0005591760 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 22 04:00:22 np0005591760 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 22 04:00:22 np0005591760 kernel: ata1.00: applying bridge limits
Jan 22 04:00:22 np0005591760 kernel: ata1.00: configured for UDMA/100
Jan 22 04:00:22 np0005591760 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 22 04:00:22 np0005591760 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 22 04:00:22 np0005591760 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 22 04:00:22 np0005591760 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 22 04:00:23 np0005591760 systemd[1]: Finished dracut initqueue hook.
Jan 22 04:00:23 np0005591760 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 22 04:00:23 np0005591760 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 22 04:00:23 np0005591760 systemd[1]: Reached target Remote File Systems.
Jan 22 04:00:23 np0005591760 systemd[1]: Starting dracut pre-mount hook...
Jan 22 04:00:23 np0005591760 systemd[1]: Finished dracut pre-mount hook.
Jan 22 04:00:23 np0005591760 systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 22 04:00:23 np0005591760 systemd-fsck[528]: /usr/sbin/fsck.xfs: XFS file system.
Jan 22 04:00:23 np0005591760 systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 22 04:00:23 np0005591760 systemd[1]: Mounting /sysroot...
Jan 22 04:00:23 np0005591760 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 22 04:00:23 np0005591760 kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 22 04:00:23 np0005591760 kernel: XFS (vda1): Ending clean mount
Jan 22 04:00:23 np0005591760 systemd[1]: Mounted /sysroot.
Jan 22 04:00:23 np0005591760 systemd[1]: Reached target Initrd Root File System.
Jan 22 04:00:23 np0005591760 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 22 04:00:23 np0005591760 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 22 04:00:23 np0005591760 systemd[1]: Reached target Initrd File Systems.
Jan 22 04:00:23 np0005591760 systemd[1]: Reached target Initrd Default Target.
Jan 22 04:00:23 np0005591760 systemd[1]: Starting dracut mount hook...
Jan 22 04:00:23 np0005591760 systemd[1]: Finished dracut mount hook.
Jan 22 04:00:23 np0005591760 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 22 04:00:23 np0005591760 rpc.idmapd[420]: exiting on signal 15
Jan 22 04:00:23 np0005591760 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 22 04:00:23 np0005591760 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Network.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Timer Units.
Jan 22 04:00:23 np0005591760 systemd[1]: dbus.socket: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 22 04:00:23 np0005591760 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Initrd Default Target.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Basic System.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Initrd Root Device.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Initrd /usr File System.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Path Units.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Remote File Systems.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Slice Units.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Socket Units.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target System Initialization.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Local File Systems.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Swaps.
Jan 22 04:00:23 np0005591760 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped dracut mount hook.
Jan 22 04:00:23 np0005591760 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped dracut pre-mount hook.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 22 04:00:23 np0005591760 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 22 04:00:23 np0005591760 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped dracut initqueue hook.
Jan 22 04:00:23 np0005591760 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped Apply Kernel Variables.
Jan 22 04:00:23 np0005591760 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 22 04:00:23 np0005591760 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped Coldplug All udev Devices.
Jan 22 04:00:23 np0005591760 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped dracut pre-trigger hook.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 22 04:00:23 np0005591760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped Setup Virtual Console.
Jan 22 04:00:23 np0005591760 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 22 04:00:23 np0005591760 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 22 04:00:23 np0005591760 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Closed udev Control Socket.
Jan 22 04:00:23 np0005591760 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Closed udev Kernel Socket.
Jan 22 04:00:23 np0005591760 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped dracut pre-udev hook.
Jan 22 04:00:23 np0005591760 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped dracut cmdline hook.
Jan 22 04:00:23 np0005591760 systemd[1]: Starting Cleanup udev Database...
Jan 22 04:00:23 np0005591760 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 22 04:00:23 np0005591760 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 22 04:00:23 np0005591760 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Stopped Create System Users.
Jan 22 04:00:23 np0005591760 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 22 04:00:23 np0005591760 systemd[1]: Finished Cleanup udev Database.
Jan 22 04:00:23 np0005591760 systemd[1]: Reached target Switch Root.
Jan 22 04:00:23 np0005591760 systemd[1]: Starting Switch Root...
Jan 22 04:00:23 np0005591760 systemd[1]: Switching root.
Jan 22 04:00:23 np0005591760 systemd-journald[284]: Journal stopped
Jan 22 04:00:24 np0005591760 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 22 04:00:24 np0005591760 kernel: audit: type=1404 audit(1769072423.745:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 22 04:00:24 np0005591760 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 04:00:24 np0005591760 kernel: SELinux:  policy capability open_perms=1
Jan 22 04:00:24 np0005591760 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 04:00:24 np0005591760 kernel: SELinux:  policy capability always_check_network=0
Jan 22 04:00:24 np0005591760 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 04:00:24 np0005591760 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 04:00:24 np0005591760 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 04:00:24 np0005591760 kernel: audit: type=1403 audit(1769072423.855:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 22 04:00:24 np0005591760 systemd: Successfully loaded SELinux policy in 114.021ms.
Jan 22 04:00:24 np0005591760 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.706ms.
Jan 22 04:00:24 np0005591760 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 22 04:00:24 np0005591760 systemd: Detected virtualization kvm.
Jan 22 04:00:24 np0005591760 systemd: Detected architecture x86-64.
Jan 22 04:00:24 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:00:24 np0005591760 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 22 04:00:24 np0005591760 systemd: Stopped Switch Root.
Jan 22 04:00:24 np0005591760 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 22 04:00:24 np0005591760 systemd: Created slice Slice /system/getty.
Jan 22 04:00:24 np0005591760 systemd: Created slice Slice /system/serial-getty.
Jan 22 04:00:24 np0005591760 systemd: Created slice Slice /system/sshd-keygen.
Jan 22 04:00:24 np0005591760 systemd: Created slice User and Session Slice.
Jan 22 04:00:24 np0005591760 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 22 04:00:24 np0005591760 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 22 04:00:24 np0005591760 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 22 04:00:24 np0005591760 systemd: Reached target Local Encrypted Volumes.
Jan 22 04:00:24 np0005591760 systemd: Stopped target Switch Root.
Jan 22 04:00:24 np0005591760 systemd: Stopped target Initrd File Systems.
Jan 22 04:00:24 np0005591760 systemd: Stopped target Initrd Root File System.
Jan 22 04:00:24 np0005591760 systemd: Reached target Local Integrity Protected Volumes.
Jan 22 04:00:24 np0005591760 systemd: Reached target Path Units.
Jan 22 04:00:24 np0005591760 systemd: Reached target rpc_pipefs.target.
Jan 22 04:00:24 np0005591760 systemd: Reached target Slice Units.
Jan 22 04:00:24 np0005591760 systemd: Reached target Swaps.
Jan 22 04:00:24 np0005591760 systemd: Reached target Local Verity Protected Volumes.
Jan 22 04:00:24 np0005591760 systemd: Listening on RPCbind Server Activation Socket.
Jan 22 04:00:24 np0005591760 systemd: Reached target RPC Port Mapper.
Jan 22 04:00:24 np0005591760 systemd: Listening on Process Core Dump Socket.
Jan 22 04:00:24 np0005591760 systemd: Listening on initctl Compatibility Named Pipe.
Jan 22 04:00:24 np0005591760 systemd: Listening on udev Control Socket.
Jan 22 04:00:24 np0005591760 systemd: Listening on udev Kernel Socket.
Jan 22 04:00:24 np0005591760 systemd: Mounting Huge Pages File System...
Jan 22 04:00:24 np0005591760 systemd: Mounting POSIX Message Queue File System...
Jan 22 04:00:24 np0005591760 systemd: Mounting Kernel Debug File System...
Jan 22 04:00:24 np0005591760 systemd: Mounting Kernel Trace File System...
Jan 22 04:00:24 np0005591760 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 22 04:00:24 np0005591760 systemd: Starting Create List of Static Device Nodes...
Jan 22 04:00:24 np0005591760 systemd: Starting Load Kernel Module configfs...
Jan 22 04:00:24 np0005591760 systemd: Starting Load Kernel Module drm...
Jan 22 04:00:24 np0005591760 systemd: Starting Load Kernel Module efi_pstore...
Jan 22 04:00:24 np0005591760 systemd: Starting Load Kernel Module fuse...
Jan 22 04:00:24 np0005591760 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 22 04:00:24 np0005591760 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 22 04:00:24 np0005591760 systemd: Stopped File System Check on Root Device.
Jan 22 04:00:24 np0005591760 systemd: Stopped Journal Service.
Jan 22 04:00:24 np0005591760 systemd: Starting Journal Service...
Jan 22 04:00:24 np0005591760 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 22 04:00:24 np0005591760 systemd: Starting Generate network units from Kernel command line...
Jan 22 04:00:24 np0005591760 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 04:00:24 np0005591760 kernel: fuse: init (API version 7.37)
Jan 22 04:00:24 np0005591760 systemd: Starting Remount Root and Kernel File Systems...
Jan 22 04:00:24 np0005591760 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 22 04:00:24 np0005591760 systemd: Starting Apply Kernel Variables...
Jan 22 04:00:24 np0005591760 systemd: Starting Coldplug All udev Devices...
Jan 22 04:00:24 np0005591760 systemd: Mounted Huge Pages File System.
Jan 22 04:00:24 np0005591760 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 22 04:00:24 np0005591760 systemd: Mounted POSIX Message Queue File System.
Jan 22 04:00:24 np0005591760 systemd: Mounted Kernel Debug File System.
Jan 22 04:00:24 np0005591760 systemd-journald[649]: Journal started
Jan 22 04:00:24 np0005591760 systemd-journald[649]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 22 04:00:24 np0005591760 systemd[1]: Queued start job for default target Multi-User System.
Jan 22 04:00:24 np0005591760 systemd: Mounted Kernel Trace File System.
Jan 22 04:00:24 np0005591760 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 22 04:00:24 np0005591760 systemd: Started Journal Service.
Jan 22 04:00:24 np0005591760 kernel: ACPI: bus type drm_connector registered
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Create List of Static Device Nodes.
Jan 22 04:00:24 np0005591760 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Load Kernel Module configfs.
Jan 22 04:00:24 np0005591760 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Load Kernel Module drm.
Jan 22 04:00:24 np0005591760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 22 04:00:24 np0005591760 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Load Kernel Module fuse.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Generate network units from Kernel command line.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Apply Kernel Variables.
Jan 22 04:00:24 np0005591760 systemd[1]: Mounting FUSE Control File System...
Jan 22 04:00:24 np0005591760 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Rebuild Hardware Database...
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 22 04:00:24 np0005591760 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Load/Save OS Random Seed...
Jan 22 04:00:24 np0005591760 systemd-journald[649]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 22 04:00:24 np0005591760 systemd-journald[649]: Received client request to flush runtime journal.
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Create System Users...
Jan 22 04:00:24 np0005591760 systemd[1]: Mounted FUSE Control File System.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Load/Save OS Random Seed.
Jan 22 04:00:24 np0005591760 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Create System Users.
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Coldplug All udev Devices.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 22 04:00:24 np0005591760 systemd[1]: Reached target Preparation for Local File Systems.
Jan 22 04:00:24 np0005591760 systemd[1]: Reached target Local File Systems.
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 22 04:00:24 np0005591760 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 22 04:00:24 np0005591760 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 22 04:00:24 np0005591760 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Automatic Boot Loader Update...
Jan 22 04:00:24 np0005591760 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Create Volatile Files and Directories...
Jan 22 04:00:24 np0005591760 bootctl[668]: Couldn't find EFI system partition, skipping.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Automatic Boot Loader Update.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Create Volatile Files and Directories.
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Security Auditing Service...
Jan 22 04:00:24 np0005591760 systemd[1]: Starting RPC Bind...
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Rebuild Journal Catalog...
Jan 22 04:00:24 np0005591760 auditd[674]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 22 04:00:24 np0005591760 auditd[674]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 22 04:00:24 np0005591760 systemd[1]: Started RPC Bind.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Rebuild Journal Catalog.
Jan 22 04:00:24 np0005591760 augenrules[679]: /sbin/augenrules: No change
Jan 22 04:00:24 np0005591760 augenrules[694]: No rules
Jan 22 04:00:24 np0005591760 augenrules[694]: enabled 1
Jan 22 04:00:24 np0005591760 augenrules[694]: failure 1
Jan 22 04:00:24 np0005591760 augenrules[694]: pid 674
Jan 22 04:00:24 np0005591760 augenrules[694]: rate_limit 0
Jan 22 04:00:24 np0005591760 augenrules[694]: backlog_limit 8192
Jan 22 04:00:24 np0005591760 augenrules[694]: lost 0
Jan 22 04:00:24 np0005591760 augenrules[694]: backlog 3
Jan 22 04:00:24 np0005591760 augenrules[694]: backlog_wait_time 60000
Jan 22 04:00:24 np0005591760 augenrules[694]: backlog_wait_time_actual 0
Jan 22 04:00:24 np0005591760 augenrules[694]: enabled 1
Jan 22 04:00:24 np0005591760 augenrules[694]: failure 1
Jan 22 04:00:24 np0005591760 augenrules[694]: pid 674
Jan 22 04:00:24 np0005591760 augenrules[694]: rate_limit 0
Jan 22 04:00:24 np0005591760 augenrules[694]: backlog_limit 8192
Jan 22 04:00:24 np0005591760 augenrules[694]: lost 0
Jan 22 04:00:24 np0005591760 augenrules[694]: backlog 2
Jan 22 04:00:24 np0005591760 augenrules[694]: backlog_wait_time 60000
Jan 22 04:00:24 np0005591760 augenrules[694]: backlog_wait_time_actual 0
Jan 22 04:00:24 np0005591760 augenrules[694]: enabled 1
Jan 22 04:00:24 np0005591760 augenrules[694]: failure 1
Jan 22 04:00:24 np0005591760 augenrules[694]: pid 674
Jan 22 04:00:24 np0005591760 augenrules[694]: rate_limit 0
Jan 22 04:00:24 np0005591760 augenrules[694]: backlog_limit 8192
Jan 22 04:00:24 np0005591760 augenrules[694]: lost 0
Jan 22 04:00:24 np0005591760 augenrules[694]: backlog 2
Jan 22 04:00:24 np0005591760 augenrules[694]: backlog_wait_time 60000
Jan 22 04:00:24 np0005591760 augenrules[694]: backlog_wait_time_actual 0
Jan 22 04:00:24 np0005591760 systemd[1]: Started Security Auditing Service.
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Rebuild Hardware Database.
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Update is Completed...
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Update is Completed.
Jan 22 04:00:24 np0005591760 systemd-udevd[702]: Using default interface naming scheme 'rhel-9.0'.
Jan 22 04:00:24 np0005591760 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 22 04:00:24 np0005591760 systemd[1]: Reached target System Initialization.
Jan 22 04:00:24 np0005591760 systemd[1]: Started dnf makecache --timer.
Jan 22 04:00:24 np0005591760 systemd[1]: Started Daily rotation of log files.
Jan 22 04:00:24 np0005591760 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 22 04:00:24 np0005591760 systemd[1]: Reached target Timer Units.
Jan 22 04:00:24 np0005591760 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 22 04:00:24 np0005591760 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 22 04:00:24 np0005591760 systemd[1]: Reached target Socket Units.
Jan 22 04:00:24 np0005591760 systemd[1]: Starting D-Bus System Message Bus...
Jan 22 04:00:24 np0005591760 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Load Kernel Module configfs...
Jan 22 04:00:24 np0005591760 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 22 04:00:24 np0005591760 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Load Kernel Module configfs.
Jan 22 04:00:24 np0005591760 systemd[1]: Started D-Bus System Message Bus.
Jan 22 04:00:24 np0005591760 systemd[1]: Reached target Basic System.
Jan 22 04:00:24 np0005591760 dbus-broker-lau[714]: Ready
Jan 22 04:00:24 np0005591760 systemd[1]: Starting NTP client/server...
Jan 22 04:00:24 np0005591760 systemd-udevd[717]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 22 04:00:24 np0005591760 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 22 04:00:24 np0005591760 systemd[1]: Starting IPv4 firewall with iptables...
Jan 22 04:00:24 np0005591760 systemd[1]: Started irqbalance daemon.
Jan 22 04:00:24 np0005591760 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 22 04:00:24 np0005591760 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 04:00:24 np0005591760 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 04:00:24 np0005591760 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 04:00:24 np0005591760 systemd[1]: Reached target sshd-keygen.target.
Jan 22 04:00:24 np0005591760 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 22 04:00:24 np0005591760 systemd[1]: Reached target User and Group Name Lookups.
Jan 22 04:00:24 np0005591760 systemd[1]: Starting User Login Management...
Jan 22 04:00:24 np0005591760 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 22 04:00:24 np0005591760 chronyd[750]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 22 04:00:24 np0005591760 chronyd[750]: Loaded 0 symmetric keys
Jan 22 04:00:24 np0005591760 chronyd[750]: Using right/UTC timezone to obtain leap second data
Jan 22 04:00:24 np0005591760 chronyd[750]: Loaded seccomp filter (level 2)
Jan 22 04:00:24 np0005591760 systemd[1]: Started NTP client/server.
Jan 22 04:00:24 np0005591760 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 22 04:00:24 np0005591760 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 22 04:00:24 np0005591760 systemd-logind[747]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 22 04:00:24 np0005591760 systemd-logind[747]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 22 04:00:24 np0005591760 systemd-logind[747]: New seat seat0.
Jan 22 04:00:24 np0005591760 systemd[1]: Started User Login Management.
Jan 22 04:00:24 np0005591760 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Jan 22 04:00:24 np0005591760 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Jan 22 04:00:24 np0005591760 kernel: Console: switching to colour dummy device 80x25
Jan 22 04:00:24 np0005591760 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 22 04:00:24 np0005591760 kernel: [drm] features: -context_init
Jan 22 04:00:24 np0005591760 kernel: [drm] number of scanouts: 1
Jan 22 04:00:24 np0005591760 kernel: [drm] number of cap sets: 0
Jan 22 04:00:24 np0005591760 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Jan 22 04:00:24 np0005591760 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 22 04:00:24 np0005591760 kernel: Console: switching to colour frame buffer device 160x50
Jan 22 04:00:24 np0005591760 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 22 04:00:24 np0005591760 kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Jan 22 04:00:24 np0005591760 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 22 04:00:24 np0005591760 iptables.init[740]: iptables: Applying firewall rules: [  OK  ]
Jan 22 04:00:24 np0005591760 systemd[1]: Finished IPv4 firewall with iptables.
Jan 22 04:00:25 np0005591760 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 22 04:00:25 np0005591760 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 22 04:00:25 np0005591760 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 22 04:00:25 np0005591760 kernel: iTCO_vendor_support: vendor-support=0
Jan 22 04:00:25 np0005591760 kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Jan 22 04:00:25 np0005591760 kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Jan 22 04:00:25 np0005591760 kernel: kvm_amd: TSC scaling supported
Jan 22 04:00:25 np0005591760 kernel: kvm_amd: Nested Virtualization enabled
Jan 22 04:00:25 np0005591760 kernel: kvm_amd: Nested Paging enabled
Jan 22 04:00:25 np0005591760 kernel: kvm_amd: LBR virtualization supported
Jan 22 04:00:25 np0005591760 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 22 04:00:25 np0005591760 kernel: kvm_amd: Virtual GIF supported
Jan 22 04:00:25 np0005591760 cloud-init[794]: Cloud-init v. 24.4-8.el9 running 'init-local' at Thu, 22 Jan 2026 09:00:25 +0000. Up 4.71 seconds.
Jan 22 04:00:25 np0005591760 systemd[1]: run-cloud\x2dinit-tmp-tmpphpxyj6_.mount: Deactivated successfully.
Jan 22 04:00:25 np0005591760 systemd[1]: Starting Hostname Service...
Jan 22 04:00:25 np0005591760 systemd[1]: Started Hostname Service.
Jan 22 04:00:25 np0005591760 systemd-hostnamed[808]: Hostname set to <np0005591760> (static)
Jan 22 04:00:25 np0005591760 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 22 04:00:25 np0005591760 systemd[1]: Reached target Preparation for Network.
Jan 22 04:00:25 np0005591760 systemd[1]: Starting Network Manager...
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.6935] NetworkManager (version 1.54.3-2.el9) is starting... (boot:236983fe-2283-446d-b460-fa27fee48ad8)
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.6940] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7027] manager[0x55e65a8e2000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7058] hostname: hostname: using hostnamed
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7059] hostname: static hostname changed from (none) to "np0005591760"
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7062] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7157] manager[0x55e65a8e2000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7157] manager[0x55e65a8e2000]: rfkill: WWAN hardware radio set enabled
Jan 22 04:00:25 np0005591760 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7220] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7221] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7222] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7223] manager: Networking is enabled by state file
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7226] settings: Loaded settings plugin: keyfile (internal)
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7243] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7268] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7280] dhcp: init: Using DHCP client 'internal'
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7283] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7296] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7305] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 04:00:25 np0005591760 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7325] device (lo): Activation: starting connection 'lo' (05d2baa5-0b49-41e6-a720-75a6ae73dfbc)
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7334] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7339] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:00:25 np0005591760 systemd[1]: Started Network Manager.
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7364] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7378] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 04:00:25 np0005591760 systemd[1]: Reached target Network.
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7391] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7393] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 04:00:25 np0005591760 systemd[1]: Starting Network Manager Wait Online...
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7412] device (eth0): carrier: link connected
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7415] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7421] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7427] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7431] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7431] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7433] manager: NetworkManager state is now CONNECTING
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7434] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7448] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:00:25 np0005591760 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7455] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7458] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Jan 22 04:00:25 np0005591760 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7544] dhcp4 (eth0): state changed new lease, address=192.168.26.184
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7557] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7577] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7580] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 04:00:25 np0005591760 NetworkManager[812]: <info>  [1769072425.7584] device (lo): Activation: successful, device activated.
Jan 22 04:00:25 np0005591760 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 22 04:00:25 np0005591760 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 22 04:00:25 np0005591760 systemd[1]: Reached target NFS client services.
Jan 22 04:00:25 np0005591760 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 22 04:00:25 np0005591760 systemd[1]: Reached target Remote File Systems.
Jan 22 04:00:25 np0005591760 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 04:00:26 np0005591760 NetworkManager[812]: <info>  [1769072426.8646] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:00:27 np0005591760 NetworkManager[812]: <info>  [1769072427.9750] dhcp6 (eth0): state changed new lease, address=2001:db8::4b
Jan 22 04:00:29 np0005591760 NetworkManager[812]: <info>  [1769072429.8097] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:00:29 np0005591760 NetworkManager[812]: <info>  [1769072429.8140] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:00:29 np0005591760 NetworkManager[812]: <info>  [1769072429.8142] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:00:29 np0005591760 NetworkManager[812]: <info>  [1769072429.8146] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 04:00:29 np0005591760 NetworkManager[812]: <info>  [1769072429.8149] device (eth0): Activation: successful, device activated.
Jan 22 04:00:29 np0005591760 NetworkManager[812]: <info>  [1769072429.8155] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 04:00:29 np0005591760 NetworkManager[812]: <info>  [1769072429.8159] manager: startup complete
Jan 22 04:00:29 np0005591760 systemd[1]: Finished Network Manager Wait Online.
Jan 22 04:00:29 np0005591760 systemd[1]: Starting Cloud-init: Network Stage...
Jan 22 04:00:30 np0005591760 cloud-init[878]: Cloud-init v. 24.4-8.el9 running 'init' at Thu, 22 Jan 2026 09:00:30 +0000. Up 9.48 seconds.
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |  eth0  | True |        192.168.26.184        | 255.255.255.0 | global | fa:16:3e:67:af:5f |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |  eth0  | True |       2001:db8::4b/128       |       .       | global | fa:16:3e:67:af:5f |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |  eth0  | True | fe80::f816:3eff:fe67:af5f/64 |       .       |  link  | fa:16:3e:67:af:5f |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |   2   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: +++++++++++++++++++++Route IPv6 info++++++++++++++++++++++
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: +-------+--------------+-------------+-----------+-------+
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: | Route | Destination  |   Gateway   | Interface | Flags |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: +-------+--------------+-------------+-----------+-------+
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |   1   | 2001:db8::1  |      ::     |    eth0   |   U   |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |   2   | 2001:db8::4b |      ::     |    eth0   |   U   |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |   3   |  fe80::/64   |      ::     |    eth0   |   U   |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |   4   |     ::/0     | 2001:db8::1 |    eth0   |   UG  |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |   6   |    local     |      ::     |    eth0   |   U   |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |   7   |    local     |      ::     |    eth0   |   U   |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: |   8   |  multicast   |      ::     |    eth0   |   U   |
Jan 22 04:00:30 np0005591760 cloud-init[878]: ci-info: +-------+--------------+-------------+-----------+-------+
Jan 22 04:00:30 np0005591760 chronyd[750]: Selected source 72.14.186.59 (2.centos.pool.ntp.org)
Jan 22 04:00:30 np0005591760 chronyd[750]: System clock TAI offset set to 37 seconds
Jan 22 04:00:31 np0005591760 cloud-init[878]: Generating public/private rsa key pair.
Jan 22 04:00:31 np0005591760 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 22 04:00:31 np0005591760 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 22 04:00:31 np0005591760 cloud-init[878]: The key fingerprint is:
Jan 22 04:00:31 np0005591760 cloud-init[878]: SHA256:aeZE+fJhZanXj5R5UG+ZIQryeEzdOQV62nIZSJ91NUo root@np0005591760
Jan 22 04:00:31 np0005591760 cloud-init[878]: The key's randomart image is:
Jan 22 04:00:31 np0005591760 cloud-init[878]: +---[RSA 3072]----+
Jan 22 04:00:31 np0005591760 cloud-init[878]: |       . o...E+==|
Jan 22 04:00:31 np0005591760 cloud-init[878]: |        *.o.*==.B|
Jan 22 04:00:31 np0005591760 cloud-init[878]: |       .o+ ++*ooo|
Jan 22 04:00:31 np0005591760 cloud-init[878]: |       ..o ++.o= |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |        S =o.+= .|
Jan 22 04:00:31 np0005591760 cloud-init[878]: |       = + oo. + |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |        . .   . .|
Jan 22 04:00:31 np0005591760 cloud-init[878]: |                 |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |                 |
Jan 22 04:00:31 np0005591760 cloud-init[878]: +----[SHA256]-----+
Jan 22 04:00:31 np0005591760 cloud-init[878]: Generating public/private ecdsa key pair.
Jan 22 04:00:31 np0005591760 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 22 04:00:31 np0005591760 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 22 04:00:31 np0005591760 cloud-init[878]: The key fingerprint is:
Jan 22 04:00:31 np0005591760 cloud-init[878]: SHA256:Cc5DcvfREX/1JkefY8LcXFfF0FFGupt8qB4UPhetL8U root@np0005591760
Jan 22 04:00:31 np0005591760 cloud-init[878]: The key's randomart image is:
Jan 22 04:00:31 np0005591760 cloud-init[878]: +---[ECDSA 256]---+
Jan 22 04:00:31 np0005591760 cloud-init[878]: |            o..*/|
Jan 22 04:00:31 np0005591760 cloud-init[878]: |           .ooo=X|
Jan 22 04:00:31 np0005591760 cloud-init[878]: |    . + . . o+=*B|
Jan 22 04:00:31 np0005591760 cloud-init[878]: |     * o o o .oX.|
Jan 22 04:00:31 np0005591760 cloud-init[878]: |      + S . + + E|
Jan 22 04:00:31 np0005591760 cloud-init[878]: |       .   . + * |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |            . * o|
Jan 22 04:00:31 np0005591760 cloud-init[878]: |             o o |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |           .o    |
Jan 22 04:00:31 np0005591760 cloud-init[878]: +----[SHA256]-----+
Jan 22 04:00:31 np0005591760 cloud-init[878]: Generating public/private ed25519 key pair.
Jan 22 04:00:31 np0005591760 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 22 04:00:31 np0005591760 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 22 04:00:31 np0005591760 cloud-init[878]: The key fingerprint is:
Jan 22 04:00:31 np0005591760 cloud-init[878]: SHA256:RHcrQMkouJJosSgcgpjGH8YDazvpi2mh3leBRM0FBWs root@np0005591760
Jan 22 04:00:31 np0005591760 cloud-init[878]: The key's randomart image is:
Jan 22 04:00:31 np0005591760 cloud-init[878]: +--[ED25519 256]--+
Jan 22 04:00:31 np0005591760 cloud-init[878]: |+oo...o*O+. .    |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |==o=...++o . .   |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |*+*.+..E. . .    |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |B++. ..o   .     |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |o=      S        |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |...    .         |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |...   .          |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |ooo  .           |
Jan 22 04:00:31 np0005591760 cloud-init[878]: |=o ..            |
Jan 22 04:00:31 np0005591760 cloud-init[878]: +----[SHA256]-----+
Jan 22 04:00:31 np0005591760 systemd[1]: Finished Cloud-init: Network Stage.
Jan 22 04:00:31 np0005591760 systemd[1]: Reached target Cloud-config availability.
Jan 22 04:00:31 np0005591760 systemd[1]: Reached target Network is Online.
Jan 22 04:00:31 np0005591760 systemd[1]: Starting Cloud-init: Config Stage...
Jan 22 04:00:31 np0005591760 systemd[1]: Starting Crash recovery kernel arming...
Jan 22 04:00:31 np0005591760 systemd[1]: Starting Notify NFS peers of a restart...
Jan 22 04:00:31 np0005591760 sm-notify[961]: Version 2.5.4 starting
Jan 22 04:00:31 np0005591760 systemd[1]: Starting System Logging Service...
Jan 22 04:00:31 np0005591760 systemd[1]: Starting OpenSSH server daemon...
Jan 22 04:00:31 np0005591760 systemd[1]: Starting Permit User Sessions...
Jan 22 04:00:31 np0005591760 systemd[1]: Started OpenSSH server daemon.
Jan 22 04:00:31 np0005591760 systemd[1]: Started Notify NFS peers of a restart.
Jan 22 04:00:31 np0005591760 systemd[1]: Finished Permit User Sessions.
Jan 22 04:00:31 np0005591760 systemd[1]: Started Command Scheduler.
Jan 22 04:00:31 np0005591760 systemd[1]: Started Getty on tty1.
Jan 22 04:00:31 np0005591760 systemd[1]: Started Serial Getty on ttyS0.
Jan 22 04:00:31 np0005591760 systemd[1]: Reached target Login Prompts.
Jan 22 04:00:31 np0005591760 rsyslogd[962]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="962" x-info="https://www.rsyslog.com"] start
Jan 22 04:00:31 np0005591760 systemd[1]: Started System Logging Service.
Jan 22 04:00:31 np0005591760 systemd[1]: Reached target Multi-User System.
Jan 22 04:00:31 np0005591760 rsyslogd[962]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 22 04:00:31 np0005591760 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 22 04:00:31 np0005591760 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 22 04:00:31 np0005591760 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 22 04:00:31 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:00:31 np0005591760 kdumpctl[981]: kdump: No kdump initial ramdisk found.
Jan 22 04:00:31 np0005591760 kdumpctl[981]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 22 04:00:31 np0005591760 cloud-init[1108]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Thu, 22 Jan 2026 09:00:31 +0000. Up 10.86 seconds.
Jan 22 04:00:31 np0005591760 systemd[1]: Finished Cloud-init: Config Stage.
Jan 22 04:00:31 np0005591760 systemd[1]: Starting Cloud-init: Final Stage...
Jan 22 04:00:31 np0005591760 dracut[1241]: dracut-057-102.git20250818.el9
Jan 22 04:00:31 np0005591760 cloud-init[1263]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Thu, 22 Jan 2026 09:00:31 +0000. Up 11.21 seconds.
Jan 22 04:00:31 np0005591760 dracut[1243]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 22 04:00:31 np0005591760 cloud-init[1306]: #############################################################
Jan 22 04:00:31 np0005591760 cloud-init[1311]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 22 04:00:31 np0005591760 cloud-init[1316]: 256 SHA256:Cc5DcvfREX/1JkefY8LcXFfF0FFGupt8qB4UPhetL8U root@np0005591760 (ECDSA)
Jan 22 04:00:31 np0005591760 cloud-init[1318]: 256 SHA256:RHcrQMkouJJosSgcgpjGH8YDazvpi2mh3leBRM0FBWs root@np0005591760 (ED25519)
Jan 22 04:00:31 np0005591760 cloud-init[1320]: 3072 SHA256:aeZE+fJhZanXj5R5UG+ZIQryeEzdOQV62nIZSJ91NUo root@np0005591760 (RSA)
Jan 22 04:00:31 np0005591760 cloud-init[1321]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 22 04:00:31 np0005591760 cloud-init[1322]: #############################################################
Jan 22 04:00:31 np0005591760 cloud-init[1263]: Cloud-init v. 24.4-8.el9 finished at Thu, 22 Jan 2026 09:00:31 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.34 seconds
Jan 22 04:00:31 np0005591760 systemd[1]: Finished Cloud-init: Final Stage.
Jan 22 04:00:31 np0005591760 systemd[1]: Reached target Cloud-init target.
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: memstrack is not available
Jan 22 04:00:32 np0005591760 dracut[1243]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 22 04:00:32 np0005591760 dracut[1243]: memstrack is not available
Jan 22 04:00:32 np0005591760 dracut[1243]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 22 04:00:32 np0005591760 dracut[1243]: *** Including module: systemd ***
Jan 22 04:00:33 np0005591760 dracut[1243]: *** Including module: fips ***
Jan 22 04:00:33 np0005591760 dracut[1243]: *** Including module: systemd-initrd ***
Jan 22 04:00:33 np0005591760 dracut[1243]: *** Including module: i18n ***
Jan 22 04:00:33 np0005591760 dracut[1243]: *** Including module: drm ***
Jan 22 04:00:34 np0005591760 dracut[1243]: *** Including module: prefixdevname ***
Jan 22 04:00:34 np0005591760 dracut[1243]: *** Including module: kernel-modules ***
Jan 22 04:00:34 np0005591760 kernel: block vda: the capability attribute has been deprecated.
Jan 22 04:00:34 np0005591760 dracut[1243]: *** Including module: kernel-modules-extra ***
Jan 22 04:00:34 np0005591760 dracut[1243]: *** Including module: qemu ***
Jan 22 04:00:34 np0005591760 dracut[1243]: *** Including module: fstab-sys ***
Jan 22 04:00:34 np0005591760 dracut[1243]: *** Including module: rootfs-block ***
Jan 22 04:00:34 np0005591760 dracut[1243]: *** Including module: terminfo ***
Jan 22 04:00:34 np0005591760 dracut[1243]: *** Including module: udev-rules ***
Jan 22 04:00:35 np0005591760 dracut[1243]: Skipping udev rule: 91-permissions.rules
Jan 22 04:00:35 np0005591760 dracut[1243]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 22 04:00:35 np0005591760 dracut[1243]: *** Including module: virtiofs ***
Jan 22 04:00:35 np0005591760 dracut[1243]: *** Including module: dracut-systemd ***
Jan 22 04:00:35 np0005591760 dracut[1243]: *** Including module: usrmount ***
Jan 22 04:00:35 np0005591760 dracut[1243]: *** Including module: base ***
Jan 22 04:00:35 np0005591760 dracut[1243]: *** Including module: fs-lib ***
Jan 22 04:00:35 np0005591760 dracut[1243]: *** Including module: kdumpbase ***
Jan 22 04:00:35 np0005591760 irqbalance[742]: Cannot change IRQ 45 affinity: Operation not permitted
Jan 22 04:00:35 np0005591760 irqbalance[742]: IRQ 45 affinity is now unmanaged
Jan 22 04:00:35 np0005591760 irqbalance[742]: Cannot change IRQ 44 affinity: Operation not permitted
Jan 22 04:00:35 np0005591760 irqbalance[742]: IRQ 44 affinity is now unmanaged
Jan 22 04:00:35 np0005591760 irqbalance[742]: Cannot change IRQ 42 affinity: Operation not permitted
Jan 22 04:00:35 np0005591760 irqbalance[742]: IRQ 42 affinity is now unmanaged
Jan 22 04:00:35 np0005591760 dracut[1243]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 22 04:00:35 np0005591760 dracut[1243]:  microcode_ctl module: mangling fw_dir
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: configuration "intel" is ignored
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 22 04:00:35 np0005591760 dracut[1243]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 22 04:00:35 np0005591760 dracut[1243]: *** Including module: openssl ***
Jan 22 04:00:35 np0005591760 dracut[1243]: *** Including module: shutdown ***
Jan 22 04:00:36 np0005591760 dracut[1243]: *** Including module: squash ***
Jan 22 04:00:36 np0005591760 dracut[1243]: *** Including modules done ***
Jan 22 04:00:36 np0005591760 dracut[1243]: *** Installing kernel module dependencies ***
Jan 22 04:00:36 np0005591760 dracut[1243]: *** Installing kernel module dependencies done ***
Jan 22 04:00:36 np0005591760 dracut[1243]: *** Resolving executable dependencies ***
Jan 22 04:00:37 np0005591760 dracut[1243]: *** Resolving executable dependencies done ***
Jan 22 04:00:37 np0005591760 dracut[1243]: *** Generating early-microcode cpio image ***
Jan 22 04:00:37 np0005591760 dracut[1243]: *** Store current command line parameters ***
Jan 22 04:00:37 np0005591760 dracut[1243]: Stored kernel commandline:
Jan 22 04:00:37 np0005591760 dracut[1243]: No dracut internal kernel commandline stored in the initramfs
Jan 22 04:00:38 np0005591760 dracut[1243]: *** Install squash loader ***
Jan 22 04:00:38 np0005591760 dracut[1243]: *** Squashing the files inside the initramfs ***
Jan 22 04:00:39 np0005591760 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 04:00:40 np0005591760 dracut[1243]: *** Squashing the files inside the initramfs done ***
Jan 22 04:00:40 np0005591760 dracut[1243]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 22 04:00:40 np0005591760 dracut[1243]: *** Hardlinking files ***
Jan 22 04:00:40 np0005591760 dracut[1243]: *** Hardlinking files done ***
Jan 22 04:00:40 np0005591760 dracut[1243]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 22 04:00:40 np0005591760 kdumpctl[981]: kdump: kexec: loaded kdump kernel
Jan 22 04:00:40 np0005591760 kdumpctl[981]: kdump: Starting kdump: [OK]
Jan 22 04:00:40 np0005591760 systemd[1]: Finished Crash recovery kernel arming.
Jan 22 04:00:40 np0005591760 systemd[1]: Startup finished in 1.180s (kernel) + 1.978s (initrd) + 17.151s (userspace) = 20.310s.
Jan 22 04:00:55 np0005591760 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 04:01:37 np0005591760 chronyd[750]: Selected source 159.203.82.102 (2.centos.pool.ntp.org)
Jan 22 04:01:58 np0005591760 systemd[1]: Created slice User Slice of UID 1000.
Jan 22 04:01:58 np0005591760 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 22 04:01:58 np0005591760 systemd-logind[747]: New session 1 of user zuul.
Jan 22 04:01:59 np0005591760 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 22 04:01:59 np0005591760 systemd[1]: Starting User Manager for UID 1000...
Jan 22 04:01:59 np0005591760 systemd[4396]: Queued start job for default target Main User Target.
Jan 22 04:01:59 np0005591760 systemd[4396]: Created slice User Application Slice.
Jan 22 04:01:59 np0005591760 systemd[4396]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 04:01:59 np0005591760 systemd[4396]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 04:01:59 np0005591760 systemd[4396]: Reached target Paths.
Jan 22 04:01:59 np0005591760 systemd[4396]: Reached target Timers.
Jan 22 04:01:59 np0005591760 systemd[4396]: Starting D-Bus User Message Bus Socket...
Jan 22 04:01:59 np0005591760 systemd[4396]: Starting Create User's Volatile Files and Directories...
Jan 22 04:01:59 np0005591760 systemd[4396]: Finished Create User's Volatile Files and Directories.
Jan 22 04:01:59 np0005591760 systemd[4396]: Listening on D-Bus User Message Bus Socket.
Jan 22 04:01:59 np0005591760 systemd[4396]: Reached target Sockets.
Jan 22 04:01:59 np0005591760 systemd[4396]: Reached target Basic System.
Jan 22 04:01:59 np0005591760 systemd[4396]: Reached target Main User Target.
Jan 22 04:01:59 np0005591760 systemd[4396]: Startup finished in 92ms.
Jan 22 04:01:59 np0005591760 systemd[1]: Started User Manager for UID 1000.
Jan 22 04:01:59 np0005591760 systemd[1]: Started Session 1 of User zuul.
Jan 22 04:01:59 np0005591760 python3[4478]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:02:01 np0005591760 python3[4506]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:02:07 np0005591760 python3[4560]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:02:07 np0005591760 python3[4600]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 22 04:02:09 np0005591760 python3[4626]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDlMRuJHMYjRIwgFF7czfU/mQMp6kd/gekboAQOEyPZmFQ60ialgxg9ko3arflxh6BUDY9IRw5tg9Bc05rdPoHNVoypQr/DoxSfvsU84qPJm+lDycIqATeh/aT4guxaryYYTWBZ4qDDNJ35iJKBI+7e8DCUN5iq/pdAbiNSXUkQ/mE/YyPwnoX/VfLmek3usQJ/7ks+f6SDf9imXAGwT8SyPYwF+zBEuiCwyHajZ7DAyPYxASuh7iKE6DtE4RAjr3e6tw4K+9sA35hpbH+WT9EhJpLUdfBpl/QToPLojuyCl4dAuCl95OwtPOeUYqdk+JFHXpD/37JeXcYPjNEoLM8nt6W20iBSKdTVjXU5ZDirWEMkSGLei0FtsZXsdLvA/YQSMBlGd9t1Ex6YkkmpgrcuppALH+M1an0gLxQnL4d1uQWn8dD3uwJfOw5KbMPjT2zVrTvRc2SpKcEsAiyiqYXq45wiJmyMXbJHUeTJ8OIbMjvRn3iwGQr3A/Hpddfw/E= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:09 np0005591760 python3[4650]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:09 np0005591760 python3[4749]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:02:09 np0005591760 python3[4820]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769072529.5107276-251-80574400427953/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=cf287e8594d94086a6e561a1533072f0_id_rsa follow=False checksum=5fe97cb15fc153784845243a4fc540b8e8f96206 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:10 np0005591760 python3[4943]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:02:10 np0005591760 python3[5014]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769072530.1538737-306-69897390784644/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=cf287e8594d94086a6e561a1533072f0_id_rsa.pub follow=False checksum=31c396396a28f842c445fba0dac187f292042cc5 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:11 np0005591760 python3[5062]: ansible-ping Invoked with data=pong
Jan 22 04:02:12 np0005591760 python3[5086]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:02:13 np0005591760 python3[5140]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 22 04:02:14 np0005591760 python3[5172]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:14 np0005591760 python3[5196]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:15 np0005591760 python3[5220]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:15 np0005591760 python3[5244]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:15 np0005591760 python3[5268]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:15 np0005591760 python3[5292]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:17 np0005591760 python3[5318]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:17 np0005591760 python3[5396]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:02:17 np0005591760 python3[5469]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769072537.1451476-31-12777583277796/source follow=False _original_basename=mirror_info.sh.j2 checksum=3f92644b791816833989d215b9a84c589a7b8ebd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:18 np0005591760 python3[5517]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:18 np0005591760 python3[5541]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:18 np0005591760 python3[5565]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:18 np0005591760 python3[5589]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:19 np0005591760 python3[5613]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:19 np0005591760 python3[5637]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:19 np0005591760 python3[5661]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:19 np0005591760 python3[5685]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:19 np0005591760 python3[5709]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:20 np0005591760 python3[5733]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:20 np0005591760 python3[5757]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:20 np0005591760 python3[5781]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:20 np0005591760 python3[5805]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:20 np0005591760 python3[5829]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:21 np0005591760 python3[5853]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:21 np0005591760 python3[5877]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:21 np0005591760 python3[5901]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:21 np0005591760 python3[5925]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:21 np0005591760 python3[5949]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:21 np0005591760 python3[5973]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:22 np0005591760 python3[5997]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:22 np0005591760 python3[6021]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:22 np0005591760 python3[6045]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:22 np0005591760 python3[6069]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:22 np0005591760 python3[6093]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:23 np0005591760 python3[6117]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:02:25 np0005591760 irqbalance[742]: Cannot change IRQ 43 affinity: Operation not permitted
Jan 22 04:02:25 np0005591760 irqbalance[742]: IRQ 43 affinity is now unmanaged
Jan 22 04:02:25 np0005591760 python3[6143]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 04:02:25 np0005591760 systemd[1]: Starting Time & Date Service...
Jan 22 04:02:25 np0005591760 systemd[1]: Started Time & Date Service.
Jan 22 04:02:25 np0005591760 systemd-timedated[6145]: Changed time zone to 'UTC' (UTC).
Jan 22 04:02:27 np0005591760 python3[6174]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:27 np0005591760 python3[6250]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:02:27 np0005591760 python3[6321]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769072547.2849698-251-40766008119475/source _original_basename=tmpqlf58a2k follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:28 np0005591760 python3[6421]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:02:28 np0005591760 python3[6492]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769072547.9011087-301-4184148129763/source _original_basename=tmp660mh2hp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:29 np0005591760 python3[6594]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:02:29 np0005591760 python3[6667]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769072548.8162057-381-275268902569821/source _original_basename=tmpkxjeeewc follow=False checksum=cebf3bdc9484125e5a24c36e6aa7f8e402ee8739 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:29 np0005591760 python3[6715]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:02:29 np0005591760 python3[6741]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:02:30 np0005591760 python3[6821]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:02:30 np0005591760 python3[6894]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769072549.958203-451-201748762973774/source _original_basename=tmpgofbboi3 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:30 np0005591760 python3[6945]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e4f-9ce5-4910-b3ff-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:02:31 np0005591760 python3[6973]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e4f-9ce5-4910-b3ff-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 22 04:02:32 np0005591760 python3[7001]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:47 np0005591760 python3[7027]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:02:55 np0005591760 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 04:03:17 np0005591760 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 22 04:03:17 np0005591760 kernel: pci 0000:07:00.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 22 04:03:17 np0005591760 kernel: pci 0000:07:00.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 22 04:03:17 np0005591760 kernel: pci 0000:07:00.0: ROM [mem 0x00000000-0x0003ffff pref]
Jan 22 04:03:17 np0005591760 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]: assigned
Jan 22 04:03:17 np0005591760 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]: assigned
Jan 22 04:03:17 np0005591760 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]: assigned
Jan 22 04:03:17 np0005591760 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 22 04:03:17 np0005591760 NetworkManager[812]: <info>  [1769072597.4066] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 04:03:17 np0005591760 systemd-udevd[7030]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:03:17 np0005591760 NetworkManager[812]: <info>  [1769072597.4172] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:03:17 np0005591760 NetworkManager[812]: <info>  [1769072597.4192] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 22 04:03:17 np0005591760 NetworkManager[812]: <info>  [1769072597.4195] device (eth1): carrier: link connected
Jan 22 04:03:17 np0005591760 NetworkManager[812]: <info>  [1769072597.4196] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 22 04:03:17 np0005591760 NetworkManager[812]: <info>  [1769072597.4200] policy: auto-activating connection 'Wired connection 1' (2bec87fb-f2ee-3ca7-abd5-18947069d89e)
Jan 22 04:03:17 np0005591760 NetworkManager[812]: <info>  [1769072597.4203] device (eth1): Activation: starting connection 'Wired connection 1' (2bec87fb-f2ee-3ca7-abd5-18947069d89e)
Jan 22 04:03:17 np0005591760 NetworkManager[812]: <info>  [1769072597.4203] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:03:17 np0005591760 NetworkManager[812]: <info>  [1769072597.4205] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:03:17 np0005591760 NetworkManager[812]: <info>  [1769072597.4208] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:03:17 np0005591760 NetworkManager[812]: <info>  [1769072597.4211] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:03:18 np0005591760 python3[7057]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e4f-9ce5-e59a-b3b6-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:03:27 np0005591760 python3[7137]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:03:27 np0005591760 python3[7210]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769072607.5604715-113-271600959065079/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=bcec6fa98718af8883987c8ad76f328a9df6757f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:03:28 np0005591760 python3[7260]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:03:28 np0005591760 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 22 04:03:28 np0005591760 systemd[1]: Stopped Network Manager Wait Online.
Jan 22 04:03:28 np0005591760 systemd[1]: Stopping Network Manager Wait Online...
Jan 22 04:03:28 np0005591760 NetworkManager[812]: <info>  [1769072608.5582] caught SIGTERM, shutting down normally.
Jan 22 04:03:28 np0005591760 systemd[1]: Stopping Network Manager...
Jan 22 04:03:28 np0005591760 NetworkManager[812]: <info>  [1769072608.5588] dhcp4 (eth0): canceled DHCP transaction
Jan 22 04:03:28 np0005591760 NetworkManager[812]: <info>  [1769072608.5588] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:03:28 np0005591760 NetworkManager[812]: <info>  [1769072608.5588] dhcp4 (eth0): state changed no lease
Jan 22 04:03:28 np0005591760 NetworkManager[812]: <info>  [1769072608.5589] dhcp6 (eth0): canceled DHCP transaction
Jan 22 04:03:28 np0005591760 NetworkManager[812]: <info>  [1769072608.5589] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:03:28 np0005591760 NetworkManager[812]: <info>  [1769072608.5589] dhcp6 (eth0): state changed no lease
Jan 22 04:03:28 np0005591760 NetworkManager[812]: <info>  [1769072608.5591] manager: NetworkManager state is now CONNECTING
Jan 22 04:03:28 np0005591760 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 04:03:28 np0005591760 NetworkManager[812]: <info>  [1769072608.5683] dhcp4 (eth1): canceled DHCP transaction
Jan 22 04:03:28 np0005591760 NetworkManager[812]: <info>  [1769072608.5683] dhcp4 (eth1): state changed no lease
Jan 22 04:03:28 np0005591760 NetworkManager[812]: <info>  [1769072608.5706] exiting (success)
Jan 22 04:03:28 np0005591760 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 04:03:28 np0005591760 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 22 04:03:28 np0005591760 systemd[1]: Stopped Network Manager.
Jan 22 04:03:28 np0005591760 systemd[1]: Starting Network Manager...
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6072] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:236983fe-2283-446d-b460-fa27fee48ad8)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6073] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6124] manager[0x5636093cc000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 04:03:28 np0005591760 systemd[1]: Starting Hostname Service...
Jan 22 04:03:28 np0005591760 systemd[1]: Started Hostname Service.
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6642] hostname: hostname: using hostnamed
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6642] hostname: static hostname changed from (none) to "np0005591760"
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6644] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6646] manager[0x5636093cc000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6647] manager[0x5636093cc000]: rfkill: WWAN hardware radio set enabled
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6665] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6665] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6665] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6665] manager: Networking is enabled by state file
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6667] settings: Loaded settings plugin: keyfile (internal)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6670] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6688] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6694] dhcp: init: Using DHCP client 'internal'
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6696] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6700] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6703] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6709] device (lo): Activation: starting connection 'lo' (05d2baa5-0b49-41e6-a720-75a6ae73dfbc)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6714] device (eth0): carrier: link connected
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6719] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6722] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6723] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6727] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6731] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6735] device (eth1): carrier: link connected
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6739] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6742] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (2bec87fb-f2ee-3ca7-abd5-18947069d89e) (indicated)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6743] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6746] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6750] device (eth1): Activation: starting connection 'Wired connection 1' (2bec87fb-f2ee-3ca7-abd5-18947069d89e)
Jan 22 04:03:28 np0005591760 systemd[1]: Started Network Manager.
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6764] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6767] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6768] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6769] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6771] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6772] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6774] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6776] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6778] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6782] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6784] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6786] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6788] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6792] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6796] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6808] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6809] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 04:03:28 np0005591760 systemd[1]: Starting Network Manager Wait Online...
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6811] device (lo): Activation: successful, device activated.
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6815] dhcp4 (eth0): state changed new lease, address=192.168.26.184
Jan 22 04:03:28 np0005591760 NetworkManager[7272]: <info>  [1769072608.6819] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 04:03:28 np0005591760 python3[7332]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e4f-9ce5-e59a-b3b6-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:03:29 np0005591760 NetworkManager[7272]: <info>  [1769072609.7385] dhcp6 (eth0): state changed new lease, address=2001:db8::4b
Jan 22 04:03:29 np0005591760 NetworkManager[7272]: <info>  [1769072609.7393] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 04:03:29 np0005591760 NetworkManager[7272]: <info>  [1769072609.7417] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 04:03:29 np0005591760 NetworkManager[7272]: <info>  [1769072609.7418] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 04:03:29 np0005591760 NetworkManager[7272]: <info>  [1769072609.7420] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 04:03:29 np0005591760 NetworkManager[7272]: <info>  [1769072609.7422] device (eth0): Activation: successful, device activated.
Jan 22 04:03:29 np0005591760 NetworkManager[7272]: <info>  [1769072609.7425] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 04:03:39 np0005591760 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 04:03:58 np0005591760 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.5848] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 04:04:13 np0005591760 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 04:04:13 np0005591760 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6012] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6013] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6017] device (eth1): Activation: successful, device activated.
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6021] manager: startup complete
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6022] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <warn>  [1769072653.6024] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6029] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 22 04:04:13 np0005591760 systemd[1]: Finished Network Manager Wait Online.
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6088] dhcp4 (eth1): canceled DHCP transaction
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6088] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6088] dhcp4 (eth1): state changed no lease
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6096] policy: auto-activating connection 'ci-private-network' (4e09e16e-c843-5077-b7e9-8e1d2a25bbdb)
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6099] device (eth1): Activation: starting connection 'ci-private-network' (4e09e16e-c843-5077-b7e9-8e1d2a25bbdb)
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6100] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6101] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6105] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6111] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6132] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6133] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:04:13 np0005591760 NetworkManager[7272]: <info>  [1769072653.6136] device (eth1): Activation: successful, device activated.
Jan 22 04:04:23 np0005591760 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 04:04:28 np0005591760 systemd-logind[747]: Session 1 logged out. Waiting for processes to exit.
Jan 22 04:04:46 np0005591760 systemd[4396]: Starting Mark boot as successful...
Jan 22 04:04:47 np0005591760 systemd[4396]: Finished Mark boot as successful.
Jan 22 04:04:49 np0005591760 systemd-logind[747]: New session 3 of user zuul.
Jan 22 04:04:49 np0005591760 systemd[1]: Started Session 3 of User zuul.
Jan 22 04:04:50 np0005591760 python3[7461]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:04:50 np0005591760 python3[7534]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769072689.8373976-379-197185951408880/source _original_basename=tmpveqpnvzu follow=False checksum=4107463a008ffd1ee3e83966dce6317b3b41e8b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:04:52 np0005591760 systemd[1]: session-3.scope: Deactivated successfully.
Jan 22 04:04:52 np0005591760 systemd-logind[747]: Session 3 logged out. Waiting for processes to exit.
Jan 22 04:04:52 np0005591760 systemd-logind[747]: Removed session 3.
Jan 22 04:07:46 np0005591760 systemd[4396]: Created slice User Background Tasks Slice.
Jan 22 04:07:46 np0005591760 systemd[4396]: Starting Cleanup of User's Temporary Files and Directories...
Jan 22 04:07:47 np0005591760 systemd[4396]: Finished Cleanup of User's Temporary Files and Directories.
Jan 22 04:09:45 np0005591760 systemd-logind[747]: New session 4 of user zuul.
Jan 22 04:09:45 np0005591760 systemd[1]: Started Session 4 of User zuul.
Jan 22 04:09:45 np0005591760 python3[7591]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e4f-9ce5-b8c1-a785-00000000216f-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:09:46 np0005591760 python3[7620]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:09:46 np0005591760 python3[7646]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:09:46 np0005591760 python3[7672]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:09:46 np0005591760 python3[7698]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:09:47 np0005591760 python3[7724]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:09:47 np0005591760 python3[7802]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:09:48 np0005591760 python3[7875]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769072987.6187696-539-76991439113757/source _original_basename=tmphbyuqihi follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:09:49 np0005591760 python3[7925]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 04:09:49 np0005591760 systemd[1]: Reloading.
Jan 22 04:09:49 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:09:50 np0005591760 python3[7981]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 22 04:09:50 np0005591760 python3[8007]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:09:50 np0005591760 python3[8035]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:09:51 np0005591760 python3[8063]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:09:51 np0005591760 python3[8091]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:09:52 np0005591760 python3[8118]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e4f-9ce5-b8c1-a785-000000002176-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:09:52 np0005591760 python3[8148]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 04:09:55 np0005591760 systemd[1]: session-4.scope: Deactivated successfully.
Jan 22 04:09:55 np0005591760 systemd[1]: session-4.scope: Consumed 2.945s CPU time.
Jan 22 04:09:55 np0005591760 systemd-logind[747]: Session 4 logged out. Waiting for processes to exit.
Jan 22 04:09:55 np0005591760 systemd-logind[747]: Removed session 4.
Jan 22 04:09:56 np0005591760 systemd-logind[747]: New session 5 of user zuul.
Jan 22 04:09:56 np0005591760 systemd[1]: Started Session 5 of User zuul.
Jan 22 04:09:56 np0005591760 python3[8183]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 04:10:07 np0005591760 setsebool[8227]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 22 04:10:07 np0005591760 setsebool[8227]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 22 04:10:15 np0005591760 kernel: SELinux:  Converting 387 SID table entries...
Jan 22 04:10:15 np0005591760 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 04:10:15 np0005591760 kernel: SELinux:  policy capability open_perms=1
Jan 22 04:10:15 np0005591760 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 04:10:15 np0005591760 kernel: SELinux:  policy capability always_check_network=0
Jan 22 04:10:15 np0005591760 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 04:10:15 np0005591760 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 04:10:15 np0005591760 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 04:10:22 np0005591760 kernel: SELinux:  Converting 390 SID table entries...
Jan 22 04:10:22 np0005591760 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 04:10:22 np0005591760 kernel: SELinux:  policy capability open_perms=1
Jan 22 04:10:22 np0005591760 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 04:10:22 np0005591760 kernel: SELinux:  policy capability always_check_network=0
Jan 22 04:10:22 np0005591760 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 04:10:22 np0005591760 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 04:10:22 np0005591760 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 04:10:34 np0005591760 dbus-broker-launch[735]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 22 04:10:34 np0005591760 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 04:10:34 np0005591760 systemd[1]: Starting man-db-cache-update.service...
Jan 22 04:10:34 np0005591760 systemd[1]: Reloading.
Jan 22 04:10:34 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:10:34 np0005591760 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 04:10:38 np0005591760 python3[14012]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e4f-9ce5-c8dd-b18c-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:10:38 np0005591760 kernel: evm: overlay not supported
Jan 22 04:10:38 np0005591760 systemd[4396]: Starting D-Bus User Message Bus...
Jan 22 04:10:38 np0005591760 dbus-broker-launch[14678]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 22 04:10:38 np0005591760 dbus-broker-launch[14678]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 22 04:10:38 np0005591760 systemd[4396]: Started D-Bus User Message Bus.
Jan 22 04:10:38 np0005591760 dbus-broker-lau[14678]: Ready
Jan 22 04:10:38 np0005591760 systemd[4396]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 22 04:10:38 np0005591760 systemd[4396]: Created slice Slice /user.
Jan 22 04:10:38 np0005591760 systemd[4396]: podman-14600.scope: unit configures an IP firewall, but not running as root.
Jan 22 04:10:38 np0005591760 systemd[4396]: (This warning is only shown for the first unit using IP firewalling.)
Jan 22 04:10:38 np0005591760 systemd[4396]: Started podman-14600.scope.
Jan 22 04:10:39 np0005591760 systemd[4396]: Started podman-pause-9a3006af.scope.
Jan 22 04:10:39 np0005591760 systemd[1]: session-5.scope: Deactivated successfully.
Jan 22 04:10:39 np0005591760 systemd[1]: session-5.scope: Consumed 30.060s CPU time.
Jan 22 04:10:40 np0005591760 systemd-logind[747]: Session 5 logged out. Waiting for processes to exit.
Jan 22 04:10:40 np0005591760 systemd-logind[747]: Removed session 5.
Jan 22 04:11:00 np0005591760 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 04:11:00 np0005591760 systemd[1]: Finished man-db-cache-update.service.
Jan 22 04:11:00 np0005591760 systemd[1]: man-db-cache-update.service: Consumed 32.283s CPU time.
Jan 22 04:11:00 np0005591760 systemd[1]: run-r2e44c3bb87fe41a29c18db4d63a217b6.service: Deactivated successfully.
Jan 22 04:11:05 np0005591760 systemd-logind[747]: New session 6 of user zuul.
Jan 22 04:11:05 np0005591760 systemd[1]: Started Session 6 of User zuul.
Jan 22 04:11:05 np0005591760 python3[29668]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQ7acfP3+x1cL69/ksfFQyqWrZMrgUMZrDd0CF5wqelljlFTSN4wbIyelUw7pI21/LAuvact75cdTAckFyYsBw= zuul@np0005591759#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:11:06 np0005591760 python3[29694]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQ7acfP3+x1cL69/ksfFQyqWrZMrgUMZrDd0CF5wqelljlFTSN4wbIyelUw7pI21/LAuvact75cdTAckFyYsBw= zuul@np0005591759#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:11:06 np0005591760 python3[29720]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005591760 update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 22 04:11:07 np0005591760 python3[29754]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQ7acfP3+x1cL69/ksfFQyqWrZMrgUMZrDd0CF5wqelljlFTSN4wbIyelUw7pI21/LAuvact75cdTAckFyYsBw= zuul@np0005591759#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 04:11:07 np0005591760 python3[29832]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:11:07 np0005591760 python3[29905]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769073067.2891624-152-76458767286214/source _original_basename=tmpoa55o99w follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:11:08 np0005591760 python3[29955]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 22 04:11:08 np0005591760 systemd[1]: Starting Hostname Service...
Jan 22 04:11:08 np0005591760 systemd[1]: Started Hostname Service.
Jan 22 04:11:08 np0005591760 systemd-hostnamed[29959]: Changed pretty hostname to 'compute-0'
Jan 22 04:11:08 np0005591760 systemd-hostnamed[29959]: Hostname set to <compute-0> (static)
Jan 22 04:11:08 np0005591760 NetworkManager[7272]: <info>  [1769073068.5656] hostname: static hostname changed from "np0005591760" to "compute-0"
Jan 22 04:11:08 np0005591760 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 04:11:08 np0005591760 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 04:11:08 np0005591760 systemd[1]: session-6.scope: Deactivated successfully.
Jan 22 04:11:08 np0005591760 systemd[1]: session-6.scope: Consumed 1.627s CPU time.
Jan 22 04:11:08 np0005591760 systemd-logind[747]: Session 6 logged out. Waiting for processes to exit.
Jan 22 04:11:08 np0005591760 systemd-logind[747]: Removed session 6.
Jan 22 04:11:18 np0005591760 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 04:11:38 np0005591760 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 04:14:21 np0005591760 systemd-logind[747]: New session 7 of user zuul.
Jan 22 04:14:21 np0005591760 systemd[1]: Started Session 7 of User zuul.
Jan 22 04:14:21 np0005591760 python3[30055]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:14:23 np0005591760 python3[30167]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:14:23 np0005591760 python3[30240]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769073262.9008026-34402-120963824935979/source mode=0755 _original_basename=delorean.repo follow=False checksum=1d7412093fdea43b5454099227a576288791d9ce backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:14:23 np0005591760 python3[30266]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:14:23 np0005591760 python3[30339]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769073262.9008026-34402-120963824935979/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=50a3fd92f8bf68f65d4644f7ea4a784e3eaa0ad5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:14:24 np0005591760 python3[30365]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:14:24 np0005591760 python3[30438]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769073262.9008026-34402-120963824935979/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=8163d09913b97597f86e38eb45c3003e91da783e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:14:24 np0005591760 python3[30464]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:14:24 np0005591760 python3[30537]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769073262.9008026-34402-120963824935979/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=d108d0750ad5b288ccc41bc6534ea307cc51e987 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:14:24 np0005591760 python3[30563]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:14:25 np0005591760 python3[30636]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769073262.9008026-34402-120963824935979/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=20c3917c672c059a872cf09a437f61890d2f89fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:14:25 np0005591760 python3[30662]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:14:25 np0005591760 python3[30735]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769073262.9008026-34402-120963824935979/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=4d14f168e8a0e6930d905faffbcdf4fedd6664d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:14:25 np0005591760 python3[30761]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:14:25 np0005591760 python3[30834]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769073262.9008026-34402-120963824935979/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:14:35 np0005591760 python3[30892]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:15:46 np0005591760 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 22 04:15:47 np0005591760 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 22 04:15:47 np0005591760 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 22 04:15:47 np0005591760 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 22 04:19:34 np0005591760 systemd-logind[747]: Session 7 logged out. Waiting for processes to exit.
Jan 22 04:19:34 np0005591760 systemd[1]: session-7.scope: Deactivated successfully.
Jan 22 04:19:34 np0005591760 systemd[1]: session-7.scope: Consumed 3.404s CPU time.
Jan 22 04:19:34 np0005591760 systemd-logind[747]: Removed session 7.
Jan 22 04:24:59 np0005591760 systemd-logind[747]: New session 8 of user zuul.
Jan 22 04:24:59 np0005591760 systemd[1]: Started Session 8 of User zuul.
Jan 22 04:25:00 np0005591760 python3.9[31051]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:25:01 np0005591760 python3.9[31232]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:25:09 np0005591760 systemd[1]: session-8.scope: Deactivated successfully.
Jan 22 04:25:09 np0005591760 systemd[1]: session-8.scope: Consumed 6.249s CPU time.
Jan 22 04:25:09 np0005591760 systemd-logind[747]: Session 8 logged out. Waiting for processes to exit.
Jan 22 04:25:09 np0005591760 systemd-logind[747]: Removed session 8.
Jan 22 04:25:24 np0005591760 systemd-logind[747]: New session 9 of user zuul.
Jan 22 04:25:24 np0005591760 systemd[1]: Started Session 9 of User zuul.
Jan 22 04:25:25 np0005591760 python3.9[31444]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 22 04:25:26 np0005591760 python3.9[31618]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:25:27 np0005591760 python3.9[31770]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:25:27 np0005591760 python3.9[31923]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:25:28 np0005591760 python3.9[32075]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:25:28 np0005591760 python3.9[32227]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:25:29 np0005591760 python3.9[32350]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769073928.6622589-172-35137611560382/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:25:30 np0005591760 python3.9[32502]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:25:30 np0005591760 python3.9[32658]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:25:31 np0005591760 python3.9[32810]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:25:31 np0005591760 python3.9[32960]: ansible-ansible.builtin.service_facts Invoked
Jan 22 04:25:33 np0005591760 python3.9[33213]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:25:34 np0005591760 python3.9[33363]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:25:35 np0005591760 python3.9[33517]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:25:36 np0005591760 python3.9[33675]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:25:37 np0005591760 python3.9[33759]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:26:49 np0005591760 systemd[1]: Reloading.
Jan 22 04:26:49 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:26:49 np0005591760 systemd[1]: Starting dnf makecache...
Jan 22 04:26:49 np0005591760 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 22 04:26:49 np0005591760 dnf[33971]: Failed determining last makecache time.
Jan 22 04:26:49 np0005591760 systemd[1]: Reloading.
Jan 22 04:26:49 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:26:49 np0005591760 dnf[33971]: delorean-openstack-barbican-42b4c41831408a8e323  20 kB/s | 3.0 kB     00:00
Jan 22 04:26:49 np0005591760 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 22 04:26:50 np0005591760 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 22 04:26:50 np0005591760 systemd[1]: Reloading.
Jan 22 04:26:50 np0005591760 dnf[33971]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7  21 kB/s | 3.0 kB     00:00
Jan 22 04:26:50 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:26:50 np0005591760 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 22 04:26:50 np0005591760 dnf[33971]: delorean-openstack-cinder-1c00d6490d88e436f26ef  21 kB/s | 3.0 kB     00:00
Jan 22 04:26:50 np0005591760 dbus-broker-launch[714]: Noticed file-system modification, trigger reload.
Jan 22 04:26:50 np0005591760 dbus-broker-launch[714]: Noticed file-system modification, trigger reload.
Jan 22 04:26:50 np0005591760 dbus-broker-launch[714]: Noticed file-system modification, trigger reload.
Jan 22 04:26:50 np0005591760 dnf[33971]: delorean-python-stevedore-c4acc5639fd2329372142  21 kB/s | 3.0 kB     00:00
Jan 22 04:26:50 np0005591760 dnf[33971]: delorean-python-cloudkitty-tests-tempest-2c80f8  21 kB/s | 3.0 kB     00:00
Jan 22 04:26:50 np0005591760 dnf[33971]: delorean-os-refresh-config-9bfc52b5049be2d8de61  22 kB/s | 3.0 kB     00:00
Jan 22 04:26:50 np0005591760 dnf[33971]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6  20 kB/s | 3.0 kB     00:00
Jan 22 04:26:50 np0005591760 dnf[33971]: delorean-python-designate-tests-tempest-347fdbc  23 kB/s | 3.0 kB     00:00
Jan 22 04:26:51 np0005591760 dnf[33971]: delorean-openstack-glance-1fd12c29b339f30fe823e  21 kB/s | 3.0 kB     00:00
Jan 22 04:26:51 np0005591760 dnf[33971]: delorean-openstack-keystone-e4b40af0ae3698fbbbb  21 kB/s | 3.0 kB     00:00
Jan 22 04:26:51 np0005591760 dnf[33971]: delorean-openstack-manila-3c01b7181572c95dac462  22 kB/s | 3.0 kB     00:00
Jan 22 04:26:51 np0005591760 dnf[33971]: delorean-python-whitebox-neutron-tests-tempest-  22 kB/s | 3.0 kB     00:00
Jan 22 04:26:51 np0005591760 dnf[33971]: delorean-openstack-octavia-ba397f07a7331190208c  21 kB/s | 3.0 kB     00:00
Jan 22 04:26:51 np0005591760 dnf[33971]: delorean-openstack-watcher-c014f81a8647287f6dcc  21 kB/s | 3.0 kB     00:00
Jan 22 04:26:51 np0005591760 dnf[33971]: delorean-ansible-config_template-5ccaa22121a7ff  20 kB/s | 3.0 kB     00:00
Jan 22 04:26:52 np0005591760 dnf[33971]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158  21 kB/s | 3.0 kB     00:00
Jan 22 04:26:52 np0005591760 dnf[33971]: delorean-openstack-swift-dc98a8463506ac520c469a  21 kB/s | 3.0 kB     00:00
Jan 22 04:26:52 np0005591760 dnf[33971]: delorean-python-tempestconf-8515371b7cceebd4282  21 kB/s | 3.0 kB     00:00
Jan 22 04:26:52 np0005591760 dnf[33971]: delorean-openstack-heat-ui-013accbfd179753bc3f0  21 kB/s | 3.0 kB     00:00
Jan 22 04:26:52 np0005591760 dnf[33971]: CentOS Stream 9 - BaseOS                         13 kB/s | 6.7 kB     00:00
Jan 22 04:26:53 np0005591760 dnf[33971]: CentOS Stream 9 - AppStream                     9.0 kB/s | 6.8 kB     00:00
Jan 22 04:26:55 np0005591760 dnf[33971]: CentOS Stream 9 - CRB                           3.5 kB/s | 6.6 kB     00:01
Jan 22 04:26:56 np0005591760 dnf[33971]: CentOS Stream 9 - Extras packages                17 kB/s | 7.3 kB     00:00
Jan 22 04:26:56 np0005591760 dnf[33971]: dlrn-antelope-testing                            22 kB/s | 3.0 kB     00:00
Jan 22 04:26:56 np0005591760 dnf[33971]: dlrn-antelope-build-deps                         22 kB/s | 3.0 kB     00:00
Jan 22 04:26:56 np0005591760 dnf[33971]: centos9-rabbitmq                                7.0 kB/s | 3.0 kB     00:00
Jan 22 04:26:57 np0005591760 dnf[33971]: centos9-storage                                 6.4 kB/s | 3.0 kB     00:00
Jan 22 04:26:57 np0005591760 dnf[33971]: centos9-opstools                                7.1 kB/s | 3.0 kB     00:00
Jan 22 04:26:59 np0005591760 dnf[33971]: NFV SIG OpenvSwitch                             1.6 kB/s | 3.0 kB     00:01
Jan 22 04:27:00 np0005591760 dnf[33971]: repo-setup-centos-appstream                      10 kB/s | 4.4 kB     00:00
Jan 22 04:27:00 np0005591760 dnf[33971]: repo-setup-centos-baseos                        9.1 kB/s | 3.9 kB     00:00
Jan 22 04:27:01 np0005591760 dnf[33971]: repo-setup-centos-highavailability              3.2 kB/s | 3.9 kB     00:01
Jan 22 04:27:02 np0005591760 dnf[33971]: repo-setup-centos-powertools                     10 kB/s | 4.3 kB     00:00
Jan 22 04:27:03 np0005591760 dnf[33971]: Extra Packages for Enterprise Linux 9 - x86_64   20 kB/s |  25 kB     00:01
Jan 22 04:27:04 np0005591760 dnf[33971]: Metadata cache created.
Jan 22 04:27:04 np0005591760 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 22 04:27:04 np0005591760 systemd[1]: Finished dnf makecache.
Jan 22 04:27:04 np0005591760 systemd[1]: dnf-makecache.service: Consumed 1.303s CPU time.
Jan 22 04:27:34 np0005591760 kernel: SELinux:  Converting 2724 SID table entries...
Jan 22 04:27:34 np0005591760 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 04:27:34 np0005591760 kernel: SELinux:  policy capability open_perms=1
Jan 22 04:27:34 np0005591760 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 04:27:34 np0005591760 kernel: SELinux:  policy capability always_check_network=0
Jan 22 04:27:34 np0005591760 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 04:27:34 np0005591760 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 04:27:34 np0005591760 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 04:27:34 np0005591760 dbus-broker-launch[735]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 22 04:27:34 np0005591760 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 04:27:34 np0005591760 systemd[1]: Starting man-db-cache-update.service...
Jan 22 04:27:34 np0005591760 systemd[1]: Reloading.
Jan 22 04:27:35 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:27:35 np0005591760 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 04:27:35 np0005591760 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 04:27:35 np0005591760 systemd[1]: Finished man-db-cache-update.service.
Jan 22 04:27:35 np0005591760 systemd[1]: run-re3fd201c2a714063b4821839749c2bc0.service: Deactivated successfully.
Jan 22 04:27:35 np0005591760 python3.9[35299]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:27:37 np0005591760 python3.9[35580]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 22 04:27:38 np0005591760 python3.9[35732]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 22 04:27:39 np0005591760 python3.9[35885]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:27:40 np0005591760 python3.9[36037]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 22 04:27:42 np0005591760 python3.9[36189]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:27:42 np0005591760 python3.9[36341]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:27:42 np0005591760 python3.9[36464]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074062.1969528-661-191964842929018/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d5973fd0d9f852bbc11b3ee817a5e73d7de1dd3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:27:44 np0005591760 python3.9[36616]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:27:46 np0005591760 python3.9[36768]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:27:47 np0005591760 python3.9[36921]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:27:50 np0005591760 python3.9[37073]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 22 04:27:51 np0005591760 python3.9[37226]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 04:27:51 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:27:51 np0005591760 python3.9[37385]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 04:27:52 np0005591760 python3.9[37545]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 22 04:27:52 np0005591760 python3.9[37698]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 04:27:53 np0005591760 python3.9[37856]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 22 04:27:54 np0005591760 python3.9[38008]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:27:55 np0005591760 python3.9[38162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:27:56 np0005591760 python3.9[38314]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:27:56 np0005591760 python3.9[38437]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074076.0761893-1018-56330239136726/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:27:57 np0005591760 python3.9[38589]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:27:57 np0005591760 systemd[1]: Starting Load Kernel Modules...
Jan 22 04:27:57 np0005591760 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 22 04:27:57 np0005591760 systemd-modules-load[38593]: Inserted module 'br_netfilter'
Jan 22 04:27:57 np0005591760 kernel: Bridge firewalling registered
Jan 22 04:27:57 np0005591760 systemd[1]: Finished Load Kernel Modules.
Jan 22 04:27:58 np0005591760 python3.9[38748]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:27:58 np0005591760 python3.9[38871]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074077.8584712-1087-202266036420636/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:27:59 np0005591760 python3.9[39023]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:28:03 np0005591760 dbus-broker-launch[714]: Noticed file-system modification, trigger reload.
Jan 22 04:28:03 np0005591760 dbus-broker-launch[714]: Noticed file-system modification, trigger reload.
Jan 22 04:28:03 np0005591760 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 04:28:03 np0005591760 systemd[1]: Starting man-db-cache-update.service...
Jan 22 04:28:03 np0005591760 systemd[1]: Reloading.
Jan 22 04:28:03 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:28:04 np0005591760 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 04:28:06 np0005591760 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 04:28:06 np0005591760 systemd[1]: Finished man-db-cache-update.service.
Jan 22 04:28:06 np0005591760 systemd[1]: man-db-cache-update.service: Consumed 3.270s CPU time.
Jan 22 04:28:06 np0005591760 systemd[1]: run-rbded47a4099c46ae93abc281ede8800d.service: Deactivated successfully.
Jan 22 04:28:06 np0005591760 python3.9[42698]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:28:07 np0005591760 python3.9[42890]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 22 04:28:07 np0005591760 python3.9[43040]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:28:08 np0005591760 python3.9[43192]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:28:08 np0005591760 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 04:28:08 np0005591760 systemd[1]: Starting Authorization Manager...
Jan 22 04:28:08 np0005591760 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 04:28:08 np0005591760 polkitd[43409]: Started polkitd version 0.117
Jan 22 04:28:08 np0005591760 systemd[1]: Started Authorization Manager.
Jan 22 04:28:09 np0005591760 python3.9[43575]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:28:09 np0005591760 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 22 04:28:09 np0005591760 systemd[1]: tuned.service: Deactivated successfully.
Jan 22 04:28:09 np0005591760 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 22 04:28:09 np0005591760 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 04:28:09 np0005591760 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 04:28:10 np0005591760 python3.9[43737]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 22 04:28:12 np0005591760 python3.9[43889]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:28:12 np0005591760 systemd[1]: Reloading.
Jan 22 04:28:12 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:28:13 np0005591760 python3.9[44078]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:28:13 np0005591760 systemd[1]: Reloading.
Jan 22 04:28:13 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:28:14 np0005591760 python3.9[44267]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:28:14 np0005591760 python3.9[44420]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:28:14 np0005591760 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 22 04:28:15 np0005591760 python3.9[44573]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:28:16 np0005591760 python3.9[44735]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:28:17 np0005591760 python3.9[44888]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:28:17 np0005591760 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 04:28:17 np0005591760 systemd[1]: Stopped Apply Kernel Variables.
Jan 22 04:28:17 np0005591760 systemd[1]: Stopping Apply Kernel Variables...
Jan 22 04:28:17 np0005591760 systemd[1]: Starting Apply Kernel Variables...
Jan 22 04:28:17 np0005591760 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 22 04:28:17 np0005591760 systemd[1]: Finished Apply Kernel Variables.
Jan 22 04:28:17 np0005591760 systemd[1]: session-9.scope: Deactivated successfully.
Jan 22 04:28:17 np0005591760 systemd[1]: session-9.scope: Consumed 1min 40.390s CPU time.
Jan 22 04:28:17 np0005591760 systemd-logind[747]: Session 9 logged out. Waiting for processes to exit.
Jan 22 04:28:17 np0005591760 systemd-logind[747]: Removed session 9.
Jan 22 04:28:22 np0005591760 systemd-logind[747]: New session 10 of user zuul.
Jan 22 04:28:22 np0005591760 systemd[1]: Started Session 10 of User zuul.
Jan 22 04:28:23 np0005591760 python3.9[45071]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:28:24 np0005591760 python3.9[45227]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 22 04:28:25 np0005591760 python3.9[45380]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 04:28:25 np0005591760 python3.9[45538]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 04:28:26 np0005591760 python3.9[45698]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:28:27 np0005591760 python3.9[45782]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 04:28:33 np0005591760 python3.9[45947]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:28:41 np0005591760 kernel: SELinux:  Converting 2736 SID table entries...
Jan 22 04:28:41 np0005591760 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 04:28:41 np0005591760 kernel: SELinux:  policy capability open_perms=1
Jan 22 04:28:41 np0005591760 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 04:28:41 np0005591760 kernel: SELinux:  policy capability always_check_network=0
Jan 22 04:28:41 np0005591760 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 04:28:41 np0005591760 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 04:28:41 np0005591760 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 04:28:41 np0005591760 dbus-broker-launch[735]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 22 04:28:41 np0005591760 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 22 04:28:42 np0005591760 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 04:28:42 np0005591760 systemd[1]: Starting man-db-cache-update.service...
Jan 22 04:28:42 np0005591760 systemd[1]: Reloading.
Jan 22 04:28:42 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:28:42 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:28:42 np0005591760 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 04:28:43 np0005591760 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 04:28:43 np0005591760 systemd[1]: Finished man-db-cache-update.service.
Jan 22 04:28:43 np0005591760 systemd[1]: run-rc83f6aec04a44498be4675e43185902b.service: Deactivated successfully.
Jan 22 04:28:45 np0005591760 python3.9[47045]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 04:28:45 np0005591760 systemd[1]: Reloading.
Jan 22 04:28:45 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:28:45 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:28:45 np0005591760 systemd[1]: Starting Open vSwitch Database Unit...
Jan 22 04:28:45 np0005591760 chown[47086]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 22 04:28:45 np0005591760 ovs-ctl[47091]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 22 04:28:45 np0005591760 ovs-ctl[47091]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 22 04:28:45 np0005591760 ovs-ctl[47091]: Starting ovsdb-server [  OK  ]
Jan 22 04:28:45 np0005591760 ovs-vsctl[47140]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 22 04:28:45 np0005591760 ovs-vsctl[47160]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"e200ec57-2c57-4374-93b1-e04a1348b8ea\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 22 04:28:45 np0005591760 ovs-ctl[47091]: Configuring Open vSwitch system IDs [  OK  ]
Jan 22 04:28:45 np0005591760 ovs-ctl[47091]: Enabling remote OVSDB managers [  OK  ]
Jan 22 04:28:45 np0005591760 systemd[1]: Started Open vSwitch Database Unit.
Jan 22 04:28:45 np0005591760 ovs-vsctl[47166]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 22 04:28:45 np0005591760 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 22 04:28:45 np0005591760 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 22 04:28:45 np0005591760 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 22 04:28:45 np0005591760 kernel: openvswitch: Open vSwitch switching datapath
Jan 22 04:28:45 np0005591760 ovs-ctl[47211]: Inserting openvswitch module [  OK  ]
Jan 22 04:28:45 np0005591760 ovs-ctl[47180]: Starting ovs-vswitchd [  OK  ]
Jan 22 04:28:45 np0005591760 ovs-ctl[47180]: Enabling remote OVSDB managers [  OK  ]
Jan 22 04:28:45 np0005591760 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 22 04:28:45 np0005591760 ovs-vsctl[47229]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 22 04:28:45 np0005591760 systemd[1]: Starting Open vSwitch...
Jan 22 04:28:45 np0005591760 systemd[1]: Finished Open vSwitch.
Jan 22 04:28:46 np0005591760 python3.9[47380]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:28:47 np0005591760 python3.9[47532]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 22 04:28:48 np0005591760 kernel: SELinux:  Converting 2750 SID table entries...
Jan 22 04:28:48 np0005591760 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 04:28:48 np0005591760 kernel: SELinux:  policy capability open_perms=1
Jan 22 04:28:48 np0005591760 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 04:28:48 np0005591760 kernel: SELinux:  policy capability always_check_network=0
Jan 22 04:28:48 np0005591760 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 04:28:48 np0005591760 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 04:28:48 np0005591760 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 04:28:48 np0005591760 python3.9[47687]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:28:49 np0005591760 dbus-broker-launch[735]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 22 04:28:49 np0005591760 python3.9[47845]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:28:51 np0005591760 python3.9[47998]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:28:52 np0005591760 python3.9[48285]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 22 04:28:53 np0005591760 python3.9[48435]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:28:53 np0005591760 python3.9[48589]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:28:56 np0005591760 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 04:28:56 np0005591760 systemd[1]: Starting man-db-cache-update.service...
Jan 22 04:28:56 np0005591760 systemd[1]: Reloading.
Jan 22 04:28:57 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:28:57 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:28:57 np0005591760 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 04:28:57 np0005591760 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 04:28:57 np0005591760 systemd[1]: Finished man-db-cache-update.service.
Jan 22 04:28:57 np0005591760 systemd[1]: run-r8e91947b68c74e178c2b0cb1c86d7fef.service: Deactivated successfully.
Jan 22 04:28:58 np0005591760 python3.9[48907]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:28:58 np0005591760 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 22 04:28:58 np0005591760 systemd[1]: Stopped Network Manager Wait Online.
Jan 22 04:28:58 np0005591760 systemd[1]: Stopping Network Manager Wait Online...
Jan 22 04:28:58 np0005591760 systemd[1]: Stopping Network Manager...
Jan 22 04:28:58 np0005591760 NetworkManager[7272]: <info>  [1769074138.0316] caught SIGTERM, shutting down normally.
Jan 22 04:28:58 np0005591760 NetworkManager[7272]: <info>  [1769074138.0326] dhcp4 (eth0): canceled DHCP transaction
Jan 22 04:28:58 np0005591760 NetworkManager[7272]: <info>  [1769074138.0327] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:28:58 np0005591760 NetworkManager[7272]: <info>  [1769074138.0327] dhcp4 (eth0): state changed no lease
Jan 22 04:28:58 np0005591760 NetworkManager[7272]: <info>  [1769074138.0328] dhcp6 (eth0): canceled DHCP transaction
Jan 22 04:28:58 np0005591760 NetworkManager[7272]: <info>  [1769074138.0328] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:28:58 np0005591760 NetworkManager[7272]: <info>  [1769074138.0328] dhcp6 (eth0): state changed no lease
Jan 22 04:28:58 np0005591760 NetworkManager[7272]: <info>  [1769074138.0329] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 04:28:58 np0005591760 NetworkManager[7272]: <info>  [1769074138.0356] exiting (success)
Jan 22 04:28:58 np0005591760 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 04:28:58 np0005591760 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 04:28:58 np0005591760 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 22 04:28:58 np0005591760 systemd[1]: Stopped Network Manager.
Jan 22 04:28:58 np0005591760 systemd[1]: Starting Network Manager...
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.0824] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:236983fe-2283-446d-b460-fa27fee48ad8)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.0825] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.0868] manager[0x562617172000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 04:28:58 np0005591760 systemd[1]: Starting Hostname Service...
Jan 22 04:28:58 np0005591760 systemd[1]: Started Hostname Service.
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1439] hostname: hostname: using hostnamed
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1440] hostname: static hostname changed from (none) to "compute-0"
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1442] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1445] manager[0x562617172000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1445] manager[0x562617172000]: rfkill: WWAN hardware radio set enabled
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1460] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1467] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1467] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1468] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1469] manager: Networking is enabled by state file
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1470] settings: Loaded settings plugin: keyfile (internal)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1472] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1494] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1500] dhcp: init: Using DHCP client 'internal'
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1502] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1505] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1509] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1515] device (lo): Activation: starting connection 'lo' (05d2baa5-0b49-41e6-a720-75a6ae73dfbc)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1520] device (eth0): carrier: link connected
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1523] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1526] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1526] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1531] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1536] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1540] device (eth1): carrier: link connected
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1543] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1547] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (4e09e16e-c843-5077-b7e9-8e1d2a25bbdb) (indicated)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1548] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1551] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1556] device (eth1): Activation: starting connection 'ci-private-network' (4e09e16e-c843-5077-b7e9-8e1d2a25bbdb)
Jan 22 04:28:58 np0005591760 systemd[1]: Started Network Manager.
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1560] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1566] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1568] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1569] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1570] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1572] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1574] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1575] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1577] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1582] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1585] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1586] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1592] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1594] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1600] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1613] dhcp4 (eth0): state changed new lease, address=192.168.26.184
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1618] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1643] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1644] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1645] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1648] device (lo): Activation: successful, device activated.
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1653] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1655] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 22 04:28:58 np0005591760 NetworkManager[48920]: <info>  [1769074138.1656] device (eth1): Activation: successful, device activated.
Jan 22 04:28:58 np0005591760 systemd[1]: Starting Network Manager Wait Online...
Jan 22 04:28:58 np0005591760 python3.9[49116]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:28:59 np0005591760 NetworkManager[48920]: <info>  [1769074139.1728] dhcp6 (eth0): state changed new lease, address=2001:db8::4b
Jan 22 04:28:59 np0005591760 NetworkManager[48920]: <info>  [1769074139.1738] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 04:28:59 np0005591760 NetworkManager[48920]: <info>  [1769074139.1763] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 04:28:59 np0005591760 NetworkManager[48920]: <info>  [1769074139.1764] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 04:28:59 np0005591760 NetworkManager[48920]: <info>  [1769074139.1767] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 04:28:59 np0005591760 NetworkManager[48920]: <info>  [1769074139.1769] device (eth0): Activation: successful, device activated.
Jan 22 04:28:59 np0005591760 NetworkManager[48920]: <info>  [1769074139.1773] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 04:28:59 np0005591760 NetworkManager[48920]: <info>  [1769074139.1788] manager: startup complete
Jan 22 04:28:59 np0005591760 systemd[1]: Finished Network Manager Wait Online.
Jan 22 04:29:08 np0005591760 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 04:29:08 np0005591760 systemd[1]: Starting man-db-cache-update.service...
Jan 22 04:29:08 np0005591760 systemd[1]: Reloading.
Jan 22 04:29:08 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:29:08 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:29:08 np0005591760 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 04:29:09 np0005591760 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 04:29:09 np0005591760 systemd[1]: Finished man-db-cache-update.service.
Jan 22 04:29:09 np0005591760 systemd[1]: run-r09ef4926ff104f86a46056e0f05cad47.service: Deactivated successfully.
Jan 22 04:29:09 np0005591760 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 04:29:10 np0005591760 python3.9[49596]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:29:10 np0005591760 python3.9[49748]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:29:11 np0005591760 python3.9[49902]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:29:11 np0005591760 python3.9[50054]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:29:12 np0005591760 python3.9[50208]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:29:12 np0005591760 python3.9[50360]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:29:13 np0005591760 python3.9[50512]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:29:13 np0005591760 python3.9[50635]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074153.109205-642-153128814330096/.source _original_basename=.pxvt31pe follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:29:14 np0005591760 python3.9[50787]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:29:15 np0005591760 python3.9[50939]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 22 04:29:15 np0005591760 python3.9[51091]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:29:17 np0005591760 python3.9[51518]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 22 04:29:18 np0005591760 ansible-async_wrapper.py[51693]: Invoked with j132600236636 300 /home/zuul/.ansible/tmp/ansible-tmp-1769074157.6218896-840-74305517352450/AnsiballZ_edpm_os_net_config.py _
Jan 22 04:29:18 np0005591760 ansible-async_wrapper.py[51696]: Starting module and watcher
Jan 22 04:29:18 np0005591760 ansible-async_wrapper.py[51696]: Start watching 51697 (300)
Jan 22 04:29:18 np0005591760 ansible-async_wrapper.py[51697]: Start module (51697)
Jan 22 04:29:18 np0005591760 ansible-async_wrapper.py[51693]: Return async_wrapper task started.
Jan 22 04:29:18 np0005591760 python3.9[51698]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 22 04:29:18 np0005591760 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 22 04:29:18 np0005591760 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 22 04:29:18 np0005591760 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 22 04:29:18 np0005591760 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 22 04:29:18 np0005591760 kernel: cfg80211: failed to load regulatory.db
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.7554] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.7572] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8014] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8015] audit: op="connection-add" uuid="ab1f0b5b-dcd6-4490-a4c0-430c54d97fd1" name="br-ex-br" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8029] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8029] audit: op="connection-add" uuid="6317dbdc-4a7e-4ba4-9f02-d75f472bc620" name="br-ex-port" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8040] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8040] audit: op="connection-add" uuid="4fc9454d-8e9d-49f9-a276-5004d2eac282" name="eth1-port" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8050] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8051] audit: op="connection-add" uuid="58b6585e-13d0-420a-b8e3-2b3616a564c9" name="vlan20-port" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8061] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8062] audit: op="connection-add" uuid="02d1cb8b-bce3-4add-9268-413db842dba2" name="vlan21-port" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8074] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8075] audit: op="connection-add" uuid="ab05e7e6-24f2-4cd7-afab-b28e6b076987" name="vlan22-port" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8086] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8087] audit: op="connection-add" uuid="51e4c118-460c-47b6-825e-2ed20217f09b" name="vlan23-port" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8104] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.routes,ipv6.may-fail,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8117] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8118] audit: op="connection-add" uuid="180fb88f-e85a-4052-b7fc-b629cccb62f3" name="br-ex-if" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8144] audit: op="connection-update" uuid="4e09e16e-c843-5077-b7e9-8e1d2a25bbdb" name="ci-private-network" args="ipv6.routes,ipv6.routing-rules,ipv6.addresses,ipv6.addr-gen-mode,ipv6.dns,ipv6.method,ipv4.routes,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ipv4.method,ipv4.never-default,ovs-external-ids.data,connection.controller,connection.master,connection.slave-type,connection.port-type,connection.timestamp,ovs-interface.type" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8157] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8159] audit: op="connection-add" uuid="9b2cf512-642b-46a2-ba1e-67d664f9fd18" name="vlan20-if" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8173] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8174] audit: op="connection-add" uuid="a4850440-89b4-42d8-a0cc-0058c26ab181" name="vlan21-if" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8187] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8188] audit: op="connection-add" uuid="c2c42ff4-e6ba-4678-a3a9-e2027259c355" name="vlan22-if" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8203] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8204] audit: op="connection-add" uuid="427def8d-3f41-4c1b-a6b0-d34e5103b7fc" name="vlan23-if" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8214] audit: op="connection-delete" uuid="2bec87fb-f2ee-3ca7-abd5-18947069d89e" name="Wired connection 1" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8224] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <warn>  [1769074159.8227] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8232] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8236] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (ab1f0b5b-dcd6-4490-a4c0-430c54d97fd1)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8237] audit: op="connection-activate" uuid="ab1f0b5b-dcd6-4490-a4c0-430c54d97fd1" name="br-ex-br" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8243] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <warn>  [1769074159.8244] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8249] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8252] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (6317dbdc-4a7e-4ba4-9f02-d75f472bc620)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8254] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <warn>  [1769074159.8254] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8258] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8262] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (4fc9454d-8e9d-49f9-a276-5004d2eac282)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8264] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <warn>  [1769074159.8264] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8268] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8271] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (58b6585e-13d0-420a-b8e3-2b3616a564c9)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8272] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <warn>  [1769074159.8273] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8277] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8280] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (02d1cb8b-bce3-4add-9268-413db842dba2)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8281] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <warn>  [1769074159.8282] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8285] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8289] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (ab05e7e6-24f2-4cd7-afab-b28e6b076987)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8290] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <warn>  [1769074159.8291] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8294] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8298] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (51e4c118-460c-47b6-825e-2ed20217f09b)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8298] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8300] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8301] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8305] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <warn>  [1769074159.8306] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8309] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8312] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (180fb88f-e85a-4052-b7fc-b629cccb62f3)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8312] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8315] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8316] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8317] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8318] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8326] device (eth1): disconnecting for new activation request.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8326] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8328] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8329] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8330] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8332] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <warn>  [1769074159.8333] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8335] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8338] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (9b2cf512-642b-46a2-ba1e-67d664f9fd18)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8339] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8341] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8343] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8343] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8346] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <warn>  [1769074159.8346] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8349] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8352] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (a4850440-89b4-42d8-a0cc-0058c26ab181)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8353] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8355] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8356] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8357] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8359] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <warn>  [1769074159.8359] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8362] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8365] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (c2c42ff4-e6ba-4678-a3a9-e2027259c355)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8365] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8367] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8369] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8370] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8372] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <warn>  [1769074159.8373] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8375] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8378] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (427def8d-3f41-4c1b-a6b0-d34e5103b7fc)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8379] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8380] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8382] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8382] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8383] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8393] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.may-fail,ipv6.addr-gen-mode,ipv6.method,ipv6.routes,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8395] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8398] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8399] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8409] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8412] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8414] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8416] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8418] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 kernel: ovs-system: entered promiscuous mode
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8430] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8432] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8435] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8436] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 kernel: Timeout policy base is empty
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8446] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8449] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 systemd-udevd[51705]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8451] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8480] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8484] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8486] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8488] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8489] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8492] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8494] dhcp4 (eth0): canceled DHCP transaction
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8495] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8495] dhcp4 (eth0): state changed no lease
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8495] dhcp6 (eth0): canceled DHCP transaction
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8495] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8495] dhcp6 (eth0): state changed no lease
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8498] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8508] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8515] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51699 uid=0 result="fail" reason="Device is not activated"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8518] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 22 04:29:19 np0005591760 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8524] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8530] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8534] dhcp4 (eth0): state changed new lease, address=192.168.26.184
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8537] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8575] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8642] device (eth1): Activation: starting connection 'ci-private-network' (4e09e16e-c843-5077-b7e9-8e1d2a25bbdb)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8645] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8651] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8654] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8659] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8662] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8669] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8671] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8673] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8674] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8676] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8677] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8682] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8688] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8690] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8695] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8700] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8703] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8708] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8712] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8718] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8723] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8728] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8732] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8737] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 04:29:19 np0005591760 kernel: br-ex: entered promiscuous mode
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8741] device (eth1): state change: ip-config -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8743] device (eth1)[Open vSwitch Port]: detaching ovs interface eth1
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8744] device (eth1): released from controller device eth1
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8751] device (eth1): disconnecting for new activation request.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8752] audit: op="connection-activate" uuid="4e09e16e-c843-5077-b7e9-8e1d2a25bbdb" name="ci-private-network" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8763] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8772] device (eth1): Activation: starting connection 'ci-private-network' (4e09e16e-c843-5077-b7e9-8e1d2a25bbdb)
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8775] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8777] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8780] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8795] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8832] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51699 uid=0 result="success"
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8833] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8850] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8860] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 kernel: vlan22: entered promiscuous mode
Jan 22 04:29:19 np0005591760 systemd-udevd[51703]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:29:19 np0005591760 kernel: vlan21: entered promiscuous mode
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8936] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8938] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8939] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8942] device (eth1): Activation: successful, device activated.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8945] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8949] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8975] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.8982] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9011] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9019] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9025] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 kernel: vlan20: entered promiscuous mode
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9030] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9053] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9074] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9078] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9085] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 04:29:19 np0005591760 systemd-udevd[51704]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:29:19 np0005591760 kernel: vlan23: entered promiscuous mode
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9123] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9143] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9169] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9174] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9182] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9229] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9243] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9257] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9260] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 04:29:19 np0005591760 NetworkManager[48920]: <info>  [1769074159.9269] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 04:29:21 np0005591760 NetworkManager[48920]: <info>  [1769074161.0356] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51699 uid=0 result="success"
Jan 22 04:29:21 np0005591760 NetworkManager[48920]: <info>  [1769074161.1547] checkpoint[0x562617148950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 22 04:29:21 np0005591760 NetworkManager[48920]: <info>  [1769074161.1548] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51699 uid=0 result="success"
Jan 22 04:29:21 np0005591760 NetworkManager[48920]: <info>  [1769074161.2853] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51699 uid=0 result="success"
Jan 22 04:29:21 np0005591760 NetworkManager[48920]: <info>  [1769074161.2864] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51699 uid=0 result="success"
Jan 22 04:29:21 np0005591760 NetworkManager[48920]: <info>  [1769074161.4786] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51699 uid=0 result="success"
Jan 22 04:29:21 np0005591760 NetworkManager[48920]: <info>  [1769074161.5951] checkpoint[0x562617148a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 22 04:29:21 np0005591760 NetworkManager[48920]: <info>  [1769074161.5955] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51699 uid=0 result="success"
Jan 22 04:29:21 np0005591760 NetworkManager[48920]: <info>  [1769074161.8387] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51699 uid=0 result="success"
Jan 22 04:29:21 np0005591760 NetworkManager[48920]: <info>  [1769074161.8396] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51699 uid=0 result="success"
Jan 22 04:29:21 np0005591760 python3.9[52059]: ansible-ansible.legacy.async_status Invoked with jid=j132600236636.51693 mode=status _async_dir=/root/.ansible_async
Jan 22 04:29:22 np0005591760 NetworkManager[48920]: <info>  [1769074162.0112] audit: op="networking-control" arg="global-dns-configuration" pid=51699 uid=0 result="success"
Jan 22 04:29:22 np0005591760 NetworkManager[48920]: <info>  [1769074162.0129] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf)
Jan 22 04:29:22 np0005591760 NetworkManager[48920]: <info>  [1769074162.0134] audit: op="networking-control" arg="global-dns-configuration" pid=51699 uid=0 result="success"
Jan 22 04:29:22 np0005591760 NetworkManager[48920]: <info>  [1769074162.0153] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51699 uid=0 result="success"
Jan 22 04:29:22 np0005591760 NetworkManager[48920]: <info>  [1769074162.1324] checkpoint[0x562617148af0]: destroy /org/freedesktop/NetworkManager/Checkpoint/3
Jan 22 04:29:22 np0005591760 NetworkManager[48920]: <info>  [1769074162.1327] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51699 uid=0 result="success"
Jan 22 04:29:22 np0005591760 ansible-async_wrapper.py[51697]: Module complete (51697)
Jan 22 04:29:23 np0005591760 ansible-async_wrapper.py[51696]: Done in kid B.
Jan 22 04:29:25 np0005591760 python3.9[52163]: ansible-ansible.legacy.async_status Invoked with jid=j132600236636.51693 mode=status _async_dir=/root/.ansible_async
Jan 22 04:29:25 np0005591760 python3.9[52263]: ansible-ansible.legacy.async_status Invoked with jid=j132600236636.51693 mode=cleanup _async_dir=/root/.ansible_async
Jan 22 04:29:26 np0005591760 python3.9[52415]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:29:26 np0005591760 python3.9[52538]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074165.8359048-921-161830094021968/.source.returncode _original_basename=.20xxrfyn follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:29:27 np0005591760 python3.9[52690]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:29:27 np0005591760 python3.9[52813]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074166.8073788-969-112044208432747/.source.cfg _original_basename=.p484a5xl follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:29:28 np0005591760 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 04:29:28 np0005591760 python3.9[52965]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:29:28 np0005591760 systemd[1]: Reloading Network Manager...
Jan 22 04:29:28 np0005591760 NetworkManager[48920]: <info>  [1769074168.2078] audit: op="reload" arg="0" pid=52971 uid=0 result="success"
Jan 22 04:29:28 np0005591760 NetworkManager[48920]: <info>  [1769074168.2083] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 22 04:29:28 np0005591760 NetworkManager[48920]: <info>  [1769074168.2084] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 04:29:28 np0005591760 systemd[1]: Reloaded Network Manager.
Jan 22 04:29:28 np0005591760 systemd[1]: session-10.scope: Deactivated successfully.
Jan 22 04:29:28 np0005591760 systemd[1]: session-10.scope: Consumed 35.838s CPU time.
Jan 22 04:29:28 np0005591760 systemd-logind[747]: Session 10 logged out. Waiting for processes to exit.
Jan 22 04:29:28 np0005591760 systemd-logind[747]: Removed session 10.
Jan 22 04:29:33 np0005591760 systemd-logind[747]: New session 11 of user zuul.
Jan 22 04:29:33 np0005591760 systemd[1]: Started Session 11 of User zuul.
Jan 22 04:29:34 np0005591760 python3.9[53155]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:29:34 np0005591760 python3.9[53309]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:29:35 np0005591760 python3.9[53503]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:29:36 np0005591760 systemd[1]: session-11.scope: Deactivated successfully.
Jan 22 04:29:36 np0005591760 systemd[1]: session-11.scope: Consumed 1.630s CPU time.
Jan 22 04:29:36 np0005591760 systemd-logind[747]: Session 11 logged out. Waiting for processes to exit.
Jan 22 04:29:36 np0005591760 systemd-logind[747]: Removed session 11.
Jan 22 04:29:38 np0005591760 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 04:29:41 np0005591760 systemd-logind[747]: New session 12 of user zuul.
Jan 22 04:29:41 np0005591760 systemd[1]: Started Session 12 of User zuul.
Jan 22 04:29:42 np0005591760 python3.9[53685]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:29:42 np0005591760 python3.9[53839]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:29:43 np0005591760 python3.9[53995]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:29:44 np0005591760 python3.9[54079]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:29:45 np0005591760 python3.9[54233]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:29:46 np0005591760 python3.9[54428]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:29:47 np0005591760 python3.9[54580]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:29:47 np0005591760 systemd[1]: var-lib-containers-storage-overlay-compat2168017383-merged.mount: Deactivated successfully.
Jan 22 04:29:47 np0005591760 podman[54581]: 2026-01-22 09:29:47.146003297 +0000 UTC m=+0.029168913 system refresh
Jan 22 04:29:47 np0005591760 python3.9[54741]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:29:48 np0005591760 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 04:29:48 np0005591760 python3.9[54865]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074187.3051035-192-48493983147868/.source.json follow=False _original_basename=podman_network_config.j2 checksum=c0d5c89fc6667f9393a5d1ac2e39a87d3c06b0a4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:29:48 np0005591760 python3.9[55017]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:29:49 np0005591760 python3.9[55140]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074188.384774-237-232436303889045/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:29:49 np0005591760 python3.9[55292]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:29:50 np0005591760 python3.9[55444]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:29:50 np0005591760 python3.9[55596]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:29:51 np0005591760 python3.9[55749]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:29:51 np0005591760 python3.9[55901]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:29:53 np0005591760 python3.9[56054]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:29:54 np0005591760 python3.9[56208]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:29:54 np0005591760 python3.9[56360]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:29:55 np0005591760 python3.9[56512]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:29:56 np0005591760 python3.9[56665]: ansible-service_facts Invoked
Jan 22 04:29:56 np0005591760 network[56682]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 04:29:56 np0005591760 network[56683]: 'network-scripts' will be removed from distribution in near future.
Jan 22 04:29:56 np0005591760 network[56684]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 04:29:59 np0005591760 python3.9[57136]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:30:01 np0005591760 python3.9[57289]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 22 04:30:02 np0005591760 python3.9[57441]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:03 np0005591760 python3.9[57566]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074202.3936164-669-155889748519509/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:03 np0005591760 python3.9[57720]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:03 np0005591760 python3.9[57845]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074203.2785108-714-142421261952958/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:05 np0005591760 python3.9[57999]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:06 np0005591760 python3.9[58153]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:30:07 np0005591760 python3.9[58237]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:30:08 np0005591760 python3.9[58391]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:30:08 np0005591760 python3.9[58475]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:30:09 np0005591760 chronyd[750]: chronyd exiting
Jan 22 04:30:09 np0005591760 systemd[1]: Stopping NTP client/server...
Jan 22 04:30:09 np0005591760 systemd[1]: chronyd.service: Deactivated successfully.
Jan 22 04:30:09 np0005591760 systemd[1]: Stopped NTP client/server.
Jan 22 04:30:09 np0005591760 systemd[1]: Starting NTP client/server...
Jan 22 04:30:09 np0005591760 chronyd[58483]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 22 04:30:09 np0005591760 chronyd[58483]: Frequency -9.746 +/- 0.373 ppm read from /var/lib/chrony/drift
Jan 22 04:30:09 np0005591760 chronyd[58483]: Loaded seccomp filter (level 2)
Jan 22 04:30:09 np0005591760 systemd[1]: Started NTP client/server.
Jan 22 04:30:09 np0005591760 systemd[1]: session-12.scope: Deactivated successfully.
Jan 22 04:30:09 np0005591760 systemd[1]: session-12.scope: Consumed 17.950s CPU time.
Jan 22 04:30:09 np0005591760 systemd-logind[747]: Session 12 logged out. Waiting for processes to exit.
Jan 22 04:30:09 np0005591760 systemd-logind[747]: Removed session 12.
Jan 22 04:30:14 np0005591760 systemd-logind[747]: New session 13 of user zuul.
Jan 22 04:30:14 np0005591760 systemd[1]: Started Session 13 of User zuul.
Jan 22 04:30:15 np0005591760 python3.9[58664]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:15 np0005591760 python3.9[58816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:16 np0005591760 python3.9[58939]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074215.4164925-57-247651077177995/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:16 np0005591760 systemd[1]: session-13.scope: Deactivated successfully.
Jan 22 04:30:16 np0005591760 systemd[1]: session-13.scope: Consumed 1.173s CPU time.
Jan 22 04:30:16 np0005591760 systemd-logind[747]: Session 13 logged out. Waiting for processes to exit.
Jan 22 04:30:16 np0005591760 systemd-logind[747]: Removed session 13.
Jan 22 04:30:21 np0005591760 systemd-logind[747]: New session 14 of user zuul.
Jan 22 04:30:21 np0005591760 systemd[1]: Started Session 14 of User zuul.
Jan 22 04:30:22 np0005591760 python3.9[59117]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:30:23 np0005591760 python3.9[59273]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:23 np0005591760 python3.9[59448]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:24 np0005591760 python3.9[59571]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769074223.4161496-78-100994284448912/.source.json _original_basename=.kf3myu8x follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:25 np0005591760 python3.9[59723]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:25 np0005591760 python3.9[59846]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074224.7438676-147-117161368101773/.source _original_basename=.nrruobf3 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:25 np0005591760 python3.9[59998]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:30:26 np0005591760 python3.9[60150]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:26 np0005591760 python3.9[60273]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074226.0834012-219-253864595107693/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:30:27 np0005591760 python3.9[60425]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:27 np0005591760 python3.9[60548]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074226.8913643-219-169961405404910/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:30:28 np0005591760 python3.9[60700]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:28 np0005591760 python3.9[60852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:28 np0005591760 python3.9[60975]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074228.162262-330-69586394302807/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:29 np0005591760 python3.9[61127]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:29 np0005591760 python3.9[61250]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074228.9961169-375-173779022161546/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:30 np0005591760 python3.9[61402]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:30:30 np0005591760 systemd[1]: Reloading.
Jan 22 04:30:30 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:30:30 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:30:30 np0005591760 systemd[1]: Reloading.
Jan 22 04:30:30 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:30:30 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:30:30 np0005591760 systemd[1]: Starting EDPM Container Shutdown...
Jan 22 04:30:30 np0005591760 systemd[1]: Finished EDPM Container Shutdown.
Jan 22 04:30:31 np0005591760 python3.9[61630]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:31 np0005591760 python3.9[61753]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074230.972609-444-208884199099345/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:32 np0005591760 python3.9[61905]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:32 np0005591760 python3.9[62028]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074231.8358262-489-48373479745436/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:33 np0005591760 python3.9[62180]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:30:33 np0005591760 systemd[1]: Reloading.
Jan 22 04:30:33 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:30:33 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:30:33 np0005591760 systemd[1]: Reloading.
Jan 22 04:30:33 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:30:33 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:30:33 np0005591760 systemd[1]: Starting Create netns directory...
Jan 22 04:30:33 np0005591760 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 04:30:33 np0005591760 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 04:30:33 np0005591760 systemd[1]: Finished Create netns directory.
Jan 22 04:30:34 np0005591760 python3.9[62406]: ansible-ansible.builtin.service_facts Invoked
Jan 22 04:30:34 np0005591760 network[62423]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 04:30:34 np0005591760 network[62424]: 'network-scripts' will be removed from distribution in near future.
Jan 22 04:30:34 np0005591760 network[62425]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 04:30:36 np0005591760 python3.9[62687]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:30:36 np0005591760 systemd[1]: Reloading.
Jan 22 04:30:36 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:30:36 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:30:36 np0005591760 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 22 04:30:36 np0005591760 iptables.init[62727]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 22 04:30:36 np0005591760 iptables.init[62727]: iptables: Flushing firewall rules: [  OK  ]
Jan 22 04:30:36 np0005591760 systemd[1]: iptables.service: Deactivated successfully.
Jan 22 04:30:36 np0005591760 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 22 04:30:37 np0005591760 python3.9[62923]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:30:37 np0005591760 python3.9[63077]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:30:37 np0005591760 systemd[1]: Reloading.
Jan 22 04:30:38 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:30:38 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:30:38 np0005591760 systemd[1]: Starting Netfilter Tables...
Jan 22 04:30:38 np0005591760 systemd[1]: Finished Netfilter Tables.
Jan 22 04:30:38 np0005591760 python3.9[63268]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:30:39 np0005591760 python3.9[63421]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:39 np0005591760 python3.9[63546]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074239.2169492-696-52604118905282/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:40 np0005591760 python3.9[63699]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:30:40 np0005591760 systemd[1]: Reloading OpenSSH server daemon...
Jan 22 04:30:40 np0005591760 systemd[1]: Reloaded OpenSSH server daemon.
Jan 22 04:30:40 np0005591760 python3.9[63855]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:41 np0005591760 python3.9[64007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:41 np0005591760 python3.9[64130]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074241.1238136-789-190492972907606/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:42 np0005591760 python3.9[64282]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 04:30:42 np0005591760 systemd[1]: Starting Time & Date Service...
Jan 22 04:30:42 np0005591760 systemd[1]: Started Time & Date Service.
Jan 22 04:30:43 np0005591760 python3.9[64438]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:43 np0005591760 python3.9[64590]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:43 np0005591760 python3.9[64713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074243.2838259-894-130767983130913/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:44 np0005591760 python3.9[64865]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:44 np0005591760 python3.9[64988]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074244.110394-939-131505856243579/.source.yaml _original_basename=.6g417gg_ follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:45 np0005591760 python3.9[65140]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:45 np0005591760 python3.9[65263]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074244.9465337-984-47769898987835/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:46 np0005591760 python3.9[65415]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:30:46 np0005591760 python3.9[65568]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:30:47 np0005591760 python3[65721]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 04:30:47 np0005591760 python3.9[65873]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:47 np0005591760 python3.9[65996]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074247.29911-1101-47535300039051/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:48 np0005591760 python3.9[66148]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:48 np0005591760 python3.9[66271]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074248.1392837-1146-110415324331983/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:49 np0005591760 python3.9[66423]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:49 np0005591760 python3.9[66546]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074248.9967136-1191-279547676768838/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:50 np0005591760 python3.9[66698]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:50 np0005591760 python3.9[66821]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074249.8248868-1236-179099944860910/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:51 np0005591760 python3.9[66973]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:30:51 np0005591760 python3.9[67096]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074250.677764-1281-116396619106063/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:51 np0005591760 python3.9[67248]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:52 np0005591760 python3.9[67400]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:30:52 np0005591760 python3.9[67559]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:53 np0005591760 python3.9[67712]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:53 np0005591760 python3.9[67864]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:30:54 np0005591760 python3.9[68016]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 04:30:55 np0005591760 python3.9[68169]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 04:30:55 np0005591760 systemd[1]: session-14.scope: Deactivated successfully.
Jan 22 04:30:55 np0005591760 systemd[1]: session-14.scope: Consumed 23.951s CPU time.
Jan 22 04:30:55 np0005591760 systemd-logind[747]: Session 14 logged out. Waiting for processes to exit.
Jan 22 04:30:55 np0005591760 systemd-logind[747]: Removed session 14.
Jan 22 04:31:00 np0005591760 systemd-logind[747]: New session 15 of user zuul.
Jan 22 04:31:00 np0005591760 systemd[1]: Started Session 15 of User zuul.
Jan 22 04:31:00 np0005591760 python3.9[68350]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 22 04:31:01 np0005591760 python3.9[68502]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:31:01 np0005591760 python3.9[68654]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:31:02 np0005591760 python3.9[68806]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1l/4Hnab8cJ+0NgZRyND+668QQ18xCAMiTa4tJfwkacqv2+xu0AP833wzvRbj+BSz/GJYAjYZHtl/LPY/fgAiwZLhNui+6RFQXnMI+TWlUgadcYlxCFSLNXdeIU4VHKdxnYN8cw8WtM+PFaCdmFRk0NGTRLladuZ2Ft6qgEk/ocZCZ1hweLpc0NBPMupsV5ABFtNEZPBg5lEqxBdbFOY3MxlYJEKWIsWCyxu9jzoxc8ct4ejcM8FVx9pujC2XCWVumSYrXkp9LnbeYCOlxnalYYTgZWNh3ilMYw3g85DVUyF1ZECfbN4/uuu9emfUiC8EmIRofJTX7/IPDpqM0CgSFHt6gq45OgfrZ+YHcpPg8Bq5JWL3rpkIoZDiidmCCGrtku8huN9VGYcahOdJVixsNrfIS2jx9k86e19gNzUSKc3qxM6HCUrH0yEbXwcOcG6b1EcBllpJsHB3uXZNar6PeI2C+BkUQH/0520RqM7Zb0ZEg4+6S6i+Z11Ddhkn+Sk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEZOEP9uQiV1zH3a3aHqfWGEuJqzUo4rClu3BLMlWitr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNySjQgocwwOdUR7+1+vff+WJ7HHi2x7SZejx49o87M82KSvvvJ1bXTTeQ2yV4jf9DSKuJ6HcIHDr6bnAXEDEj8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLkHp1/Qvor0RkXO+PvZvnJssDpVN93zM11quNN8iQ4KKQf8UHuKy+z84HXpOkzuxv1FNmR50SFPdR2h52T9/BEP+zzSmYli9cDaisI9zLQpghAnG+lXYjqsiPIXqR2z4IheTXQWRoc0c/9XzYCUMaMD73LVsv2ZTHG2Y7QfvK4MxYDPfGzTPihT0BaumTQQi1aKi5eILvXezyBhIgOrgWXDy73LvUS0A1PnwBTWjez2dmfEl2SozhpeqVRSmWdCZ8dRtXREfB6Mq/AC0SFrdQRYBB1fp6IKFrJhehXq8uN9YGQim7NDv95g1Vbg09hBzVMVRBut+meLFMgQicOFxX4cOH/zmBq2HZZ4NgoXQIttG2MWvRDeeOArcoiR4trg88CvXIKbHm7X3Xz124i1la6Znzd233vMLjW61sfm2BSiRvi2U199hCeHLpCKZDeXEfNKKws4/PCyJpilTrDhy01w/oqI6uKjCvuEpfNoDSqx4gfjAyjJboFWEV2ArMddk=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAzNyDe1tBrOdz2+WL/pj9pc2M51PHCPiPpvoZYn4bHE#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBELYxft8jWfz1ywTUaPBtZwChEDFG53eKlkYcIDxgJP7KVnKVHGrkh7LMAVvlpn5gDq4gHPOx2/pvsvKR+u3AfU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDx6FoZ1mQHUkExUKBX3RXUJtaZVmdK+/kJ75+oWOFtIZlx0mZcdVNn/rW0Q++oQhtNRWXFfZrC6xkhCT1INz4AehTVQ2y9DTa6PxylfZKv4SS0yNLP/UkFFMiKtWgxzfnFYniRmVr6pgKNAsIxOlGQHtYY9MzvNCU0rfxVJQV1DM7am+c3mbsqlU0w7R+Tur5zDSLFdysQdDqAk4UqlqkgYagUBOhC/cnkuUNOyj3idOKJhFrz/mnkO3P/KrXcgMPfFtu+yx5rQNDNyoZV1bp+uPgP8kvQGe5ol/cbTEiXlZ5BEgYcKbky8H1ICbcoiG5YcmEMNOm8s88fxvf6dJpdeAmjmraoHZtKson2jeZ7NsYgsjNhwKEElcxzAfhnhK+IfalpZhHQxGypR/IPlQrLlJOrbyAEIyk40nASUHxlJrOXP1lA9dvLaG/3KkIa2sPwaIgdVhzpmyodJds2sMg6cngRljDGY1UBTYGyo8vNNILFoCzMPNDcNCyY9xWYz8M=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICDQuz7VE0tTRnQJ96QrHIwmJh8osJY9A2+gmzkUlh54#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM7hnQz957+RtY0Mltzkw+lJRI4x2IlQwAuVKb+t24lorNdYqOmeiT8j8X9huVxPKGZSUxesKQ7YFrI9bxqNRo4=#012 create=True mode=0644 path=/tmp/ansible.ieic8xn2 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:31:03 np0005591760 python3.9[68958]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ieic8xn2' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:31:03 np0005591760 python3.9[69112]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ieic8xn2 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:31:03 np0005591760 systemd[1]: session-15.scope: Deactivated successfully.
Jan 22 04:31:03 np0005591760 systemd[1]: session-15.scope: Consumed 2.325s CPU time.
Jan 22 04:31:03 np0005591760 systemd-logind[747]: Session 15 logged out. Waiting for processes to exit.
Jan 22 04:31:03 np0005591760 systemd-logind[747]: Removed session 15.
Jan 22 04:31:09 np0005591760 systemd-logind[747]: New session 16 of user zuul.
Jan 22 04:31:09 np0005591760 systemd[1]: Started Session 16 of User zuul.
Jan 22 04:31:09 np0005591760 python3.9[69290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:31:10 np0005591760 python3.9[69446]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 04:31:11 np0005591760 python3.9[69600]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:31:12 np0005591760 python3.9[69753]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:31:12 np0005591760 python3.9[69906]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:31:12 np0005591760 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 04:31:13 np0005591760 python3.9[70062]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:31:13 np0005591760 python3.9[70217]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:31:14 np0005591760 systemd[1]: session-16.scope: Deactivated successfully.
Jan 22 04:31:14 np0005591760 systemd[1]: session-16.scope: Consumed 3.058s CPU time.
Jan 22 04:31:14 np0005591760 systemd-logind[747]: Session 16 logged out. Waiting for processes to exit.
Jan 22 04:31:14 np0005591760 systemd-logind[747]: Removed session 16.
Jan 22 04:31:19 np0005591760 systemd-logind[747]: New session 17 of user zuul.
Jan 22 04:31:19 np0005591760 systemd[1]: Started Session 17 of User zuul.
Jan 22 04:31:20 np0005591760 python3.9[70395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:31:20 np0005591760 python3.9[70551]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:31:21 np0005591760 python3.9[70635]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 04:31:22 np0005591760 python3.9[70786]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:31:23 np0005591760 python3.9[70937]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 04:31:24 np0005591760 python3.9[71087]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:31:24 np0005591760 python3.9[71237]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:31:25 np0005591760 systemd[1]: session-17.scope: Deactivated successfully.
Jan 22 04:31:25 np0005591760 systemd[1]: session-17.scope: Consumed 4.057s CPU time.
Jan 22 04:31:25 np0005591760 systemd-logind[747]: Session 17 logged out. Waiting for processes to exit.
Jan 22 04:31:25 np0005591760 systemd-logind[747]: Removed session 17.
Jan 22 04:31:32 np0005591760 systemd-logind[747]: New session 18 of user zuul.
Jan 22 04:31:32 np0005591760 systemd[1]: Started Session 18 of User zuul.
Jan 22 04:31:36 np0005591760 python3[72003]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:31:37 np0005591760 python3[72094]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 04:31:38 np0005591760 python3[72121]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 04:31:38 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:31:39 np0005591760 python3[72148]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:31:39 np0005591760 kernel: loop: module loaded
Jan 22 04:31:39 np0005591760 kernel: loop3: detected capacity change from 0 to 41943040
Jan 22 04:31:39 np0005591760 python3[72183]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:31:39 np0005591760 lvm[72186]: PV /dev/loop3 not used.
Jan 22 04:31:39 np0005591760 lvm[72195]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:31:39 np0005591760 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 22 04:31:39 np0005591760 lvm[72197]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 22 04:31:39 np0005591760 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 22 04:31:39 np0005591760 python3[72275]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:31:40 np0005591760 python3[72348]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769074299.61813-37337-64404655177004/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:31:40 np0005591760 python3[72398]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:31:40 np0005591760 systemd[1]: Reloading.
Jan 22 04:31:40 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:31:40 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:31:40 np0005591760 systemd[1]: Starting Ceph OSD losetup...
Jan 22 04:31:40 np0005591760 bash[72439]: /dev/loop3: [64513]:4328461 (/var/lib/ceph-osd-0.img)
Jan 22 04:31:40 np0005591760 systemd[1]: Finished Ceph OSD losetup.
Jan 22 04:31:40 np0005591760 lvm[72440]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:31:40 np0005591760 lvm[72440]: VG ceph_vg0 finished
Jan 22 04:31:42 np0005591760 python3[72464]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:31:44 np0005591760 python3[72557]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 04:31:46 np0005591760 python3[72616]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 04:31:49 np0005591760 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 04:31:49 np0005591760 systemd[1]: Starting man-db-cache-update.service...
Jan 22 04:31:49 np0005591760 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 04:31:49 np0005591760 systemd[1]: Finished man-db-cache-update.service.
Jan 22 04:31:49 np0005591760 systemd[1]: run-r423ff08b747a4350a7eab24c686473f3.service: Deactivated successfully.
Jan 22 04:31:50 np0005591760 python3[72731]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 04:31:50 np0005591760 python3[72759]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:31:50 np0005591760 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 04:31:50 np0005591760 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 04:31:51 np0005591760 python3[72816]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:31:51 np0005591760 python3[72842]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:31:51 np0005591760 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 04:31:51 np0005591760 python3[72920]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:31:52 np0005591760 python3[72993]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769074311.707659-37529-130076583802501/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:31:52 np0005591760 python3[73095]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:31:52 np0005591760 python3[73168]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769074312.5216944-37547-126996037297627/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:31:53 np0005591760 python3[73218]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 04:31:53 np0005591760 python3[73246]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 04:31:53 np0005591760 python3[73274]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 04:31:54 np0005591760 python3[73300]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 04:31:54 np0005591760 python3[73326]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:31:54 np0005591760 systemd-logind[747]: New session 19 of user ceph-admin.
Jan 22 04:31:54 np0005591760 systemd[1]: Created slice User Slice of UID 42477.
Jan 22 04:31:54 np0005591760 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 22 04:31:54 np0005591760 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 22 04:31:54 np0005591760 systemd[1]: Starting User Manager for UID 42477...
Jan 22 04:31:54 np0005591760 systemd[73334]: Queued start job for default target Main User Target.
Jan 22 04:31:54 np0005591760 systemd[73334]: Created slice User Application Slice.
Jan 22 04:31:54 np0005591760 systemd[73334]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 04:31:54 np0005591760 systemd[73334]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 04:31:54 np0005591760 systemd[73334]: Reached target Paths.
Jan 22 04:31:54 np0005591760 systemd[73334]: Reached target Timers.
Jan 22 04:31:54 np0005591760 systemd[73334]: Starting D-Bus User Message Bus Socket...
Jan 22 04:31:54 np0005591760 systemd[73334]: Starting Create User's Volatile Files and Directories...
Jan 22 04:31:54 np0005591760 systemd[73334]: Listening on D-Bus User Message Bus Socket.
Jan 22 04:31:54 np0005591760 systemd[73334]: Finished Create User's Volatile Files and Directories.
Jan 22 04:31:54 np0005591760 systemd[73334]: Reached target Sockets.
Jan 22 04:31:54 np0005591760 systemd[73334]: Reached target Basic System.
Jan 22 04:31:54 np0005591760 systemd[1]: Started User Manager for UID 42477.
Jan 22 04:31:54 np0005591760 systemd[73334]: Reached target Main User Target.
Jan 22 04:31:54 np0005591760 systemd[73334]: Startup finished in 82ms.
Jan 22 04:31:54 np0005591760 systemd[1]: Started Session 19 of User ceph-admin.
Jan 22 04:31:54 np0005591760 systemd[1]: session-19.scope: Deactivated successfully.
Jan 22 04:31:54 np0005591760 systemd-logind[747]: Session 19 logged out. Waiting for processes to exit.
Jan 22 04:31:54 np0005591760 systemd-logind[747]: Removed session 19.
Jan 22 04:31:54 np0005591760 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 04:31:54 np0005591760 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 04:31:57 np0005591760 systemd[1]: var-lib-containers-storage-overlay-compat868719389-lower\x2dmapped.mount: Deactivated successfully.
Jan 22 04:32:04 np0005591760 systemd[1]: Stopping User Manager for UID 42477...
Jan 22 04:32:04 np0005591760 systemd[73334]: Activating special unit Exit the Session...
Jan 22 04:32:04 np0005591760 systemd[73334]: Stopped target Main User Target.
Jan 22 04:32:04 np0005591760 systemd[73334]: Stopped target Basic System.
Jan 22 04:32:04 np0005591760 systemd[73334]: Stopped target Paths.
Jan 22 04:32:04 np0005591760 systemd[73334]: Stopped target Sockets.
Jan 22 04:32:04 np0005591760 systemd[73334]: Stopped target Timers.
Jan 22 04:32:04 np0005591760 systemd[73334]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 22 04:32:04 np0005591760 systemd[73334]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 22 04:32:04 np0005591760 systemd[73334]: Closed D-Bus User Message Bus Socket.
Jan 22 04:32:04 np0005591760 systemd[73334]: Stopped Create User's Volatile Files and Directories.
Jan 22 04:32:04 np0005591760 systemd[73334]: Removed slice User Application Slice.
Jan 22 04:32:04 np0005591760 systemd[73334]: Reached target Shutdown.
Jan 22 04:32:04 np0005591760 systemd[73334]: Finished Exit the Session.
Jan 22 04:32:04 np0005591760 systemd[73334]: Reached target Exit the Session.
Jan 22 04:32:04 np0005591760 systemd[1]: user@42477.service: Deactivated successfully.
Jan 22 04:32:04 np0005591760 systemd[1]: Stopped User Manager for UID 42477.
Jan 22 04:32:04 np0005591760 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 22 04:32:04 np0005591760 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 22 04:32:04 np0005591760 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 22 04:32:04 np0005591760 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 22 04:32:04 np0005591760 systemd[1]: Removed slice User Slice of UID 42477.
Jan 22 04:32:14 np0005591760 podman[73424]: 2026-01-22 09:32:14.508712483 +0000 UTC m=+19.652230169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:14 np0005591760 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 04:32:14 np0005591760 podman[73476]: 2026-01-22 09:32:14.556624188 +0000 UTC m=+0.029130157 container create 17676430c9990cf8f107fbf348d72884fe6cc6357d2ab7217dadb3030164c1e9 (image=quay.io/ceph/ceph:v19, name=fervent_allen, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 22 04:32:14 np0005591760 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 22 04:32:14 np0005591760 systemd[1]: Started libpod-conmon-17676430c9990cf8f107fbf348d72884fe6cc6357d2ab7217dadb3030164c1e9.scope.
Jan 22 04:32:14 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:14 np0005591760 podman[73476]: 2026-01-22 09:32:14.62890727 +0000 UTC m=+0.101413239 container init 17676430c9990cf8f107fbf348d72884fe6cc6357d2ab7217dadb3030164c1e9 (image=quay.io/ceph/ceph:v19, name=fervent_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:14 np0005591760 podman[73476]: 2026-01-22 09:32:14.633317063 +0000 UTC m=+0.105823032 container start 17676430c9990cf8f107fbf348d72884fe6cc6357d2ab7217dadb3030164c1e9 (image=quay.io/ceph/ceph:v19, name=fervent_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:14 np0005591760 podman[73476]: 2026-01-22 09:32:14.634446202 +0000 UTC m=+0.106952181 container attach 17676430c9990cf8f107fbf348d72884fe6cc6357d2ab7217dadb3030164c1e9 (image=quay.io/ceph/ceph:v19, name=fervent_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 04:32:14 np0005591760 podman[73476]: 2026-01-22 09:32:14.544458874 +0000 UTC m=+0.016964853 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:14 np0005591760 fervent_allen[73489]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Jan 22 04:32:14 np0005591760 systemd[1]: libpod-17676430c9990cf8f107fbf348d72884fe6cc6357d2ab7217dadb3030164c1e9.scope: Deactivated successfully.
Jan 22 04:32:14 np0005591760 podman[73476]: 2026-01-22 09:32:14.712061866 +0000 UTC m=+0.184567845 container died 17676430c9990cf8f107fbf348d72884fe6cc6357d2ab7217dadb3030164c1e9 (image=quay.io/ceph/ceph:v19, name=fervent_allen, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:32:14 np0005591760 systemd[1]: var-lib-containers-storage-overlay-fb370cbcbad019a8cea0e10cbde8b2aa6415ce818b77730670ea7a23fd0e537e-merged.mount: Deactivated successfully.
Jan 22 04:32:14 np0005591760 podman[73476]: 2026-01-22 09:32:14.731763467 +0000 UTC m=+0.204269437 container remove 17676430c9990cf8f107fbf348d72884fe6cc6357d2ab7217dadb3030164c1e9 (image=quay.io/ceph/ceph:v19, name=fervent_allen, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:14 np0005591760 systemd[1]: libpod-conmon-17676430c9990cf8f107fbf348d72884fe6cc6357d2ab7217dadb3030164c1e9.scope: Deactivated successfully.
Jan 22 04:32:14 np0005591760 podman[73503]: 2026-01-22 09:32:14.774221591 +0000 UTC m=+0.028383580 container create 66cfadf5baf58e2d549c7890089015bbc6fed7c123acacefec634973b0451752 (image=quay.io/ceph/ceph:v19, name=nervous_wright, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:14 np0005591760 systemd[1]: Started libpod-conmon-66cfadf5baf58e2d549c7890089015bbc6fed7c123acacefec634973b0451752.scope.
Jan 22 04:32:14 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:14 np0005591760 podman[73503]: 2026-01-22 09:32:14.814089512 +0000 UTC m=+0.068251501 container init 66cfadf5baf58e2d549c7890089015bbc6fed7c123acacefec634973b0451752 (image=quay.io/ceph/ceph:v19, name=nervous_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:14 np0005591760 podman[73503]: 2026-01-22 09:32:14.817602775 +0000 UTC m=+0.071764765 container start 66cfadf5baf58e2d549c7890089015bbc6fed7c123acacefec634973b0451752 (image=quay.io/ceph/ceph:v19, name=nervous_wright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 04:32:14 np0005591760 podman[73503]: 2026-01-22 09:32:14.818579266 +0000 UTC m=+0.072741256 container attach 66cfadf5baf58e2d549c7890089015bbc6fed7c123acacefec634973b0451752 (image=quay.io/ceph/ceph:v19, name=nervous_wright, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:32:14 np0005591760 systemd[1]: libpod-66cfadf5baf58e2d549c7890089015bbc6fed7c123acacefec634973b0451752.scope: Deactivated successfully.
Jan 22 04:32:14 np0005591760 nervous_wright[73517]: 167 167
Jan 22 04:32:14 np0005591760 conmon[73517]: conmon 66cfadf5baf58e2d549c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66cfadf5baf58e2d549c7890089015bbc6fed7c123acacefec634973b0451752.scope/container/memory.events
Jan 22 04:32:14 np0005591760 podman[73503]: 2026-01-22 09:32:14.821055786 +0000 UTC m=+0.075217785 container died 66cfadf5baf58e2d549c7890089015bbc6fed7c123acacefec634973b0451752 (image=quay.io/ceph/ceph:v19, name=nervous_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 22 04:32:14 np0005591760 podman[73503]: 2026-01-22 09:32:14.834654181 +0000 UTC m=+0.088816170 container remove 66cfadf5baf58e2d549c7890089015bbc6fed7c123acacefec634973b0451752 (image=quay.io/ceph/ceph:v19, name=nervous_wright, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:14 np0005591760 podman[73503]: 2026-01-22 09:32:14.763449113 +0000 UTC m=+0.017611122 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:14 np0005591760 systemd[1]: libpod-conmon-66cfadf5baf58e2d549c7890089015bbc6fed7c123acacefec634973b0451752.scope: Deactivated successfully.
Jan 22 04:32:14 np0005591760 podman[73530]: 2026-01-22 09:32:14.8804887 +0000 UTC m=+0.030001041 container create 0d03b91e64b6b6bf2a639ea6e15581dac157a3d87797e6da63df7656f671fa9d (image=quay.io/ceph/ceph:v19, name=sad_sinoussi, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:14 np0005591760 systemd[1]: Started libpod-conmon-0d03b91e64b6b6bf2a639ea6e15581dac157a3d87797e6da63df7656f671fa9d.scope.
Jan 22 04:32:14 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:14 np0005591760 podman[73530]: 2026-01-22 09:32:14.917009441 +0000 UTC m=+0.066521781 container init 0d03b91e64b6b6bf2a639ea6e15581dac157a3d87797e6da63df7656f671fa9d (image=quay.io/ceph/ceph:v19, name=sad_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 04:32:14 np0005591760 podman[73530]: 2026-01-22 09:32:14.920549715 +0000 UTC m=+0.070062034 container start 0d03b91e64b6b6bf2a639ea6e15581dac157a3d87797e6da63df7656f671fa9d (image=quay.io/ceph/ceph:v19, name=sad_sinoussi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 04:32:14 np0005591760 podman[73530]: 2026-01-22 09:32:14.921800203 +0000 UTC m=+0.071312543 container attach 0d03b91e64b6b6bf2a639ea6e15581dac157a3d87797e6da63df7656f671fa9d (image=quay.io/ceph/ceph:v19, name=sad_sinoussi, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:14 np0005591760 sad_sinoussi[73545]: AQCe7nFp66HWNxAA6HOSmOST3TZqDeYYM1Zvqw==
Jan 22 04:32:14 np0005591760 systemd[1]: libpod-0d03b91e64b6b6bf2a639ea6e15581dac157a3d87797e6da63df7656f671fa9d.scope: Deactivated successfully.
Jan 22 04:32:14 np0005591760 podman[73530]: 2026-01-22 09:32:14.939142296 +0000 UTC m=+0.088654616 container died 0d03b91e64b6b6bf2a639ea6e15581dac157a3d87797e6da63df7656f671fa9d (image=quay.io/ceph/ceph:v19, name=sad_sinoussi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 04:32:14 np0005591760 podman[73530]: 2026-01-22 09:32:14.954694025 +0000 UTC m=+0.104206345 container remove 0d03b91e64b6b6bf2a639ea6e15581dac157a3d87797e6da63df7656f671fa9d (image=quay.io/ceph/ceph:v19, name=sad_sinoussi, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:14 np0005591760 podman[73530]: 2026-01-22 09:32:14.869842149 +0000 UTC m=+0.019354479 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:14 np0005591760 systemd[1]: libpod-conmon-0d03b91e64b6b6bf2a639ea6e15581dac157a3d87797e6da63df7656f671fa9d.scope: Deactivated successfully.
Jan 22 04:32:14 np0005591760 podman[73561]: 2026-01-22 09:32:14.99208212 +0000 UTC m=+0.025067879 container create 41b4df6871238f946989ee7b397eae34067c258c8a3b8a766491d637df752933 (image=quay.io/ceph/ceph:v19, name=intelligent_kepler, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:15 np0005591760 systemd[1]: Started libpod-conmon-41b4df6871238f946989ee7b397eae34067c258c8a3b8a766491d637df752933.scope.
Jan 22 04:32:15 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:15 np0005591760 podman[73561]: 2026-01-22 09:32:15.03290431 +0000 UTC m=+0.065890069 container init 41b4df6871238f946989ee7b397eae34067c258c8a3b8a766491d637df752933 (image=quay.io/ceph/ceph:v19, name=intelligent_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 04:32:15 np0005591760 podman[73561]: 2026-01-22 09:32:15.037149534 +0000 UTC m=+0.070135282 container start 41b4df6871238f946989ee7b397eae34067c258c8a3b8a766491d637df752933 (image=quay.io/ceph/ceph:v19, name=intelligent_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:15 np0005591760 podman[73561]: 2026-01-22 09:32:15.038180537 +0000 UTC m=+0.071166286 container attach 41b4df6871238f946989ee7b397eae34067c258c8a3b8a766491d637df752933 (image=quay.io/ceph/ceph:v19, name=intelligent_kepler, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:15 np0005591760 intelligent_kepler[73574]: AQCf7nFpXVkmAxAA61Cc3fOsK3g3LJan8PXzyQ==
Jan 22 04:32:15 np0005591760 systemd[1]: libpod-41b4df6871238f946989ee7b397eae34067c258c8a3b8a766491d637df752933.scope: Deactivated successfully.
Jan 22 04:32:15 np0005591760 podman[73581]: 2026-01-22 09:32:15.078847934 +0000 UTC m=+0.015583679 container died 41b4df6871238f946989ee7b397eae34067c258c8a3b8a766491d637df752933 (image=quay.io/ceph/ceph:v19, name=intelligent_kepler, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:32:15 np0005591760 podman[73561]: 2026-01-22 09:32:14.981904633 +0000 UTC m=+0.014890402 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:15 np0005591760 podman[73581]: 2026-01-22 09:32:15.09285387 +0000 UTC m=+0.029589604 container remove 41b4df6871238f946989ee7b397eae34067c258c8a3b8a766491d637df752933 (image=quay.io/ceph/ceph:v19, name=intelligent_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:15 np0005591760 systemd[1]: libpod-conmon-41b4df6871238f946989ee7b397eae34067c258c8a3b8a766491d637df752933.scope: Deactivated successfully.
Jan 22 04:32:15 np0005591760 podman[73593]: 2026-01-22 09:32:15.136328189 +0000 UTC m=+0.025949201 container create 7ec2ae881927bf2f7e1208d92e1cb1b682e5bbf3593cd1b4be92ab1a2dbf0304 (image=quay.io/ceph/ceph:v19, name=hopeful_taussig, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:15 np0005591760 systemd[1]: Started libpod-conmon-7ec2ae881927bf2f7e1208d92e1cb1b682e5bbf3593cd1b4be92ab1a2dbf0304.scope.
Jan 22 04:32:15 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:15 np0005591760 podman[73593]: 2026-01-22 09:32:15.174164258 +0000 UTC m=+0.063785291 container init 7ec2ae881927bf2f7e1208d92e1cb1b682e5bbf3593cd1b4be92ab1a2dbf0304 (image=quay.io/ceph/ceph:v19, name=hopeful_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Jan 22 04:32:15 np0005591760 podman[73593]: 2026-01-22 09:32:15.178155804 +0000 UTC m=+0.067776816 container start 7ec2ae881927bf2f7e1208d92e1cb1b682e5bbf3593cd1b4be92ab1a2dbf0304 (image=quay.io/ceph/ceph:v19, name=hopeful_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:15 np0005591760 podman[73593]: 2026-01-22 09:32:15.179163564 +0000 UTC m=+0.068784577 container attach 7ec2ae881927bf2f7e1208d92e1cb1b682e5bbf3593cd1b4be92ab1a2dbf0304 (image=quay.io/ceph/ceph:v19, name=hopeful_taussig, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:15 np0005591760 hopeful_taussig[73607]: AQCf7nFpQVWSCxAALmLYUgnSFhW6DY44j51L4Q==
Jan 22 04:32:15 np0005591760 systemd[1]: libpod-7ec2ae881927bf2f7e1208d92e1cb1b682e5bbf3593cd1b4be92ab1a2dbf0304.scope: Deactivated successfully.
Jan 22 04:32:15 np0005591760 podman[73593]: 2026-01-22 09:32:15.196344323 +0000 UTC m=+0.085965336 container died 7ec2ae881927bf2f7e1208d92e1cb1b682e5bbf3593cd1b4be92ab1a2dbf0304 (image=quay.io/ceph/ceph:v19, name=hopeful_taussig, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Jan 22 04:32:15 np0005591760 podman[73593]: 2026-01-22 09:32:15.214438546 +0000 UTC m=+0.104059559 container remove 7ec2ae881927bf2f7e1208d92e1cb1b682e5bbf3593cd1b4be92ab1a2dbf0304 (image=quay.io/ceph/ceph:v19, name=hopeful_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:15 np0005591760 podman[73593]: 2026-01-22 09:32:15.126461638 +0000 UTC m=+0.016082671 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:15 np0005591760 systemd[1]: libpod-conmon-7ec2ae881927bf2f7e1208d92e1cb1b682e5bbf3593cd1b4be92ab1a2dbf0304.scope: Deactivated successfully.
Jan 22 04:32:15 np0005591760 podman[73623]: 2026-01-22 09:32:15.255431767 +0000 UTC m=+0.026029091 container create e0c954c1348924c0bc6394828fe8da70dde1660a7833974ca7b4e81804967768 (image=quay.io/ceph/ceph:v19, name=busy_morse, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 04:32:15 np0005591760 systemd[1]: Started libpod-conmon-e0c954c1348924c0bc6394828fe8da70dde1660a7833974ca7b4e81804967768.scope.
Jan 22 04:32:15 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449392fca09127f8bb9239f4445630bae536e4d2628dc2a29a10835f5b1c9bde/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:15 np0005591760 podman[73623]: 2026-01-22 09:32:15.29673306 +0000 UTC m=+0.067330405 container init e0c954c1348924c0bc6394828fe8da70dde1660a7833974ca7b4e81804967768 (image=quay.io/ceph/ceph:v19, name=busy_morse, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:15 np0005591760 podman[73623]: 2026-01-22 09:32:15.300589039 +0000 UTC m=+0.071186363 container start e0c954c1348924c0bc6394828fe8da70dde1660a7833974ca7b4e81804967768 (image=quay.io/ceph/ceph:v19, name=busy_morse, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 22 04:32:15 np0005591760 podman[73623]: 2026-01-22 09:32:15.302014817 +0000 UTC m=+0.072612141 container attach e0c954c1348924c0bc6394828fe8da70dde1660a7833974ca7b4e81804967768 (image=quay.io/ceph/ceph:v19, name=busy_morse, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 04:32:15 np0005591760 busy_morse[73635]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 22 04:32:15 np0005591760 busy_morse[73635]: setting min_mon_release = quincy
Jan 22 04:32:15 np0005591760 busy_morse[73635]: /usr/bin/monmaptool: set fsid to 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:32:15 np0005591760 busy_morse[73635]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 22 04:32:15 np0005591760 systemd[1]: libpod-e0c954c1348924c0bc6394828fe8da70dde1660a7833974ca7b4e81804967768.scope: Deactivated successfully.
Jan 22 04:32:15 np0005591760 podman[73623]: 2026-01-22 09:32:15.322818648 +0000 UTC m=+0.093415971 container died e0c954c1348924c0bc6394828fe8da70dde1660a7833974ca7b4e81804967768 (image=quay.io/ceph/ceph:v19, name=busy_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Jan 22 04:32:15 np0005591760 podman[73623]: 2026-01-22 09:32:15.337299758 +0000 UTC m=+0.107897082 container remove e0c954c1348924c0bc6394828fe8da70dde1660a7833974ca7b4e81804967768 (image=quay.io/ceph/ceph:v19, name=busy_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 04:32:15 np0005591760 podman[73623]: 2026-01-22 09:32:15.245265852 +0000 UTC m=+0.015863196 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:15 np0005591760 systemd[1]: libpod-conmon-e0c954c1348924c0bc6394828fe8da70dde1660a7833974ca7b4e81804967768.scope: Deactivated successfully.
Jan 22 04:32:15 np0005591760 podman[73652]: 2026-01-22 09:32:15.379589183 +0000 UTC m=+0.026518484 container create 75d6a7c8996341327cd11b8313c1a28cf37467021e5b80c59702afaccc84d4c9 (image=quay.io/ceph/ceph:v19, name=eloquent_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 04:32:15 np0005591760 systemd[1]: Started libpod-conmon-75d6a7c8996341327cd11b8313c1a28cf37467021e5b80c59702afaccc84d4c9.scope.
Jan 22 04:32:15 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6626490142bff99ac75d8ce87edf8760bdf1b685cdd25ef02a386b8426fdf1eb/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6626490142bff99ac75d8ce87edf8760bdf1b685cdd25ef02a386b8426fdf1eb/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6626490142bff99ac75d8ce87edf8760bdf1b685cdd25ef02a386b8426fdf1eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6626490142bff99ac75d8ce87edf8760bdf1b685cdd25ef02a386b8426fdf1eb/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:15 np0005591760 podman[73652]: 2026-01-22 09:32:15.424798915 +0000 UTC m=+0.071728216 container init 75d6a7c8996341327cd11b8313c1a28cf37467021e5b80c59702afaccc84d4c9 (image=quay.io/ceph/ceph:v19, name=eloquent_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:15 np0005591760 podman[73652]: 2026-01-22 09:32:15.431217274 +0000 UTC m=+0.078146575 container start 75d6a7c8996341327cd11b8313c1a28cf37467021e5b80c59702afaccc84d4c9 (image=quay.io/ceph/ceph:v19, name=eloquent_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 22 04:32:15 np0005591760 podman[73652]: 2026-01-22 09:32:15.432381891 +0000 UTC m=+0.079311181 container attach 75d6a7c8996341327cd11b8313c1a28cf37467021e5b80c59702afaccc84d4c9 (image=quay.io/ceph/ceph:v19, name=eloquent_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 04:32:15 np0005591760 podman[73652]: 2026-01-22 09:32:15.369282083 +0000 UTC m=+0.016211394 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:15 np0005591760 systemd[1]: libpod-75d6a7c8996341327cd11b8313c1a28cf37467021e5b80c59702afaccc84d4c9.scope: Deactivated successfully.
Jan 22 04:32:15 np0005591760 podman[73652]: 2026-01-22 09:32:15.472289476 +0000 UTC m=+0.119218777 container died 75d6a7c8996341327cd11b8313c1a28cf37467021e5b80c59702afaccc84d4c9 (image=quay.io/ceph/ceph:v19, name=eloquent_cerf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 04:32:15 np0005591760 podman[73652]: 2026-01-22 09:32:15.488018529 +0000 UTC m=+0.134947829 container remove 75d6a7c8996341327cd11b8313c1a28cf37467021e5b80c59702afaccc84d4c9 (image=quay.io/ceph/ceph:v19, name=eloquent_cerf, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:15 np0005591760 systemd[1]: libpod-conmon-75d6a7c8996341327cd11b8313c1a28cf37467021e5b80c59702afaccc84d4c9.scope: Deactivated successfully.
Jan 22 04:32:15 np0005591760 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 04:32:15 np0005591760 systemd[1]: Reloading.
Jan 22 04:32:15 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:32:15 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:32:15 np0005591760 systemd[1]: Reloading.
Jan 22 04:32:15 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:32:15 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:32:15 np0005591760 systemd[1]: Reached target All Ceph clusters and services.
Jan 22 04:32:15 np0005591760 systemd[1]: Reloading.
Jan 22 04:32:15 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:32:15 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:32:16 np0005591760 systemd[1]: Reached target Ceph cluster 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:32:16 np0005591760 systemd[1]: Reloading.
Jan 22 04:32:16 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:32:16 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:32:16 np0005591760 systemd[1]: Reloading.
Jan 22 04:32:16 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:32:16 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:32:16 np0005591760 systemd[1]: Created slice Slice /system/ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:32:16 np0005591760 systemd[1]: Reached target System Time Set.
Jan 22 04:32:16 np0005591760 systemd[1]: Reached target System Time Synchronized.
Jan 22 04:32:16 np0005591760 systemd[1]: Starting Ceph mon.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:32:16 np0005591760 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 04:32:16 np0005591760 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 04:32:16 np0005591760 podman[73932]: 2026-01-22 09:32:16.676665361 +0000 UTC m=+0.026403148 container create 2bf0d4f60a1160507385df543ea93d301352a5cacd864858a2f68db1251207bc (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 22 04:32:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4284fb49b095b7d855f72c80096fdd34f10aa2ee0e96245cf4cc41d2e4011119/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4284fb49b095b7d855f72c80096fdd34f10aa2ee0e96245cf4cc41d2e4011119/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4284fb49b095b7d855f72c80096fdd34f10aa2ee0e96245cf4cc41d2e4011119/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4284fb49b095b7d855f72c80096fdd34f10aa2ee0e96245cf4cc41d2e4011119/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:16 np0005591760 podman[73932]: 2026-01-22 09:32:16.718603194 +0000 UTC m=+0.068340980 container init 2bf0d4f60a1160507385df543ea93d301352a5cacd864858a2f68db1251207bc (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:16 np0005591760 podman[73932]: 2026-01-22 09:32:16.722560304 +0000 UTC m=+0.072298091 container start 2bf0d4f60a1160507385df543ea93d301352a5cacd864858a2f68db1251207bc (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 04:32:16 np0005591760 bash[73932]: 2bf0d4f60a1160507385df543ea93d301352a5cacd864858a2f68db1251207bc
Jan 22 04:32:16 np0005591760 podman[73932]: 2026-01-22 09:32:16.665223841 +0000 UTC m=+0.014961648 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:16 np0005591760 systemd[1]: Started Ceph mon.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: pidfile_write: ignore empty --pid-file
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: load: jerasure load: lrc 
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: RocksDB version: 7.9.2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Git sha 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: DB SUMMARY
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: DB Session ID:  5L6421JP3V3CTVB1CNSH
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: CURRENT file:  CURRENT
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                         Options.error_if_exists: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                       Options.create_if_missing: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                                     Options.env: 0x55aeffc64c20
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                                Options.info_log: 0x55af00d68d60
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                              Options.statistics: (nil)
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                               Options.use_fsync: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                              Options.db_log_dir: 
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                                 Options.wal_dir: 
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                    Options.write_buffer_manager: 0x55af00d6d900
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                  Options.unordered_write: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                               Options.row_cache: None
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                              Options.wal_filter: None
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.two_write_queues: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.wal_compression: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.atomic_flush: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.max_background_jobs: 2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.max_background_compactions: -1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.max_subcompactions: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.max_total_wal_size: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                          Options.max_open_files: -1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:       Options.compaction_readahead_size: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Compression algorithms supported:
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: #011kZSTD supported: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: #011kXpressCompression supported: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: #011kBZip2Compression supported: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: #011kLZ4Compression supported: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: #011kZlibCompression supported: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: #011kSnappyCompression supported: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:           Options.merge_operator: 
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:        Options.compaction_filter: None
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55af00d68500)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55af00d8d350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:        Options.write_buffer_size: 33554432
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:  Options.max_write_buffer_number: 2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:          Options.compression: NoCompression
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.num_levels: 7
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 804a04cf-10ce-4c4c-aa43-09122b4af995
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074336756765, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074336762376, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "5L6421JP3V3CTVB1CNSH", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074336762517, "job": 1, "event": "recovery_finished"}
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55af00d8ee00
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: DB pointer 0x55af00e98000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55af00d8d350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@-1(???) e0 preinit fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 04:32:16 np0005591760 podman[73949]: 2026-01-22 09:32:16.771755158 +0000 UTC m=+0.028679888 container create 728d2912f0f0b674b7e4ae1474da9ef52c4a1585c53f5a16fbddc7aa797d55db (image=quay.io/ceph/ceph:v19, name=inspiring_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(cluster) log [DBG] : fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(cluster) log [DBG] : last_changed 2026-01-22T09:32:15.320230+0000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(cluster) log [DBG] : created 2026-01-22T09:32:15.320230+0000
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC 7763 64-Core Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865364,os=Linux}
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).mds e1 new map
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-01-22T09:32:16:777810+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(cluster) log [DBG] : fsmap 
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mkfs 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 22 04:32:16 np0005591760 systemd[1]: Started libpod-conmon-728d2912f0f0b674b7e4ae1474da9ef52c4a1585c53f5a16fbddc7aa797d55db.scope.
Jan 22 04:32:16 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1cc866f8d279de8236d645efa35f837ffff1673b9ebbc78e728ce04f8fd8b84/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1cc866f8d279de8236d645efa35f837ffff1673b9ebbc78e728ce04f8fd8b84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1cc866f8d279de8236d645efa35f837ffff1673b9ebbc78e728ce04f8fd8b84/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:16 np0005591760 podman[73949]: 2026-01-22 09:32:16.843624659 +0000 UTC m=+0.100549400 container init 728d2912f0f0b674b7e4ae1474da9ef52c4a1585c53f5a16fbddc7aa797d55db (image=quay.io/ceph/ceph:v19, name=inspiring_ride, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 04:32:16 np0005591760 podman[73949]: 2026-01-22 09:32:16.848107671 +0000 UTC m=+0.105032402 container start 728d2912f0f0b674b7e4ae1474da9ef52c4a1585c53f5a16fbddc7aa797d55db (image=quay.io/ceph/ceph:v19, name=inspiring_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 04:32:16 np0005591760 podman[73949]: 2026-01-22 09:32:16.849505656 +0000 UTC m=+0.106430386 container attach 728d2912f0f0b674b7e4ae1474da9ef52c4a1585c53f5a16fbddc7aa797d55db (image=quay.io/ceph/ceph:v19, name=inspiring_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 04:32:16 np0005591760 podman[73949]: 2026-01-22 09:32:16.759829765 +0000 UTC m=+0.016754516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 22 04:32:16 np0005591760 ceph-mon[73948]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3064984886' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]:  cluster:
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]:    id:     43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]:    health: HEALTH_OK
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]: 
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]:  services:
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]:    mon: 1 daemons, quorum compute-0 (age 0.215433s)
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]:    mgr: no daemons active
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]:    osd: 0 osds: 0 up, 0 in
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]: 
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]:  data:
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]:    pools:   0 pools, 0 pgs
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]:    objects: 0 objects, 0 B
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]:    usage:   0 B used, 0 B / 0 B avail
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]:    pgs:     
Jan 22 04:32:16 np0005591760 inspiring_ride[74001]: 
Jan 22 04:32:17 np0005591760 systemd[1]: libpod-728d2912f0f0b674b7e4ae1474da9ef52c4a1585c53f5a16fbddc7aa797d55db.scope: Deactivated successfully.
Jan 22 04:32:17 np0005591760 podman[73949]: 2026-01-22 09:32:17.004569408 +0000 UTC m=+0.261494140 container died 728d2912f0f0b674b7e4ae1474da9ef52c4a1585c53f5a16fbddc7aa797d55db (image=quay.io/ceph/ceph:v19, name=inspiring_ride, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:17 np0005591760 podman[73949]: 2026-01-22 09:32:17.024388833 +0000 UTC m=+0.281313563 container remove 728d2912f0f0b674b7e4ae1474da9ef52c4a1585c53f5a16fbddc7aa797d55db (image=quay.io/ceph/ceph:v19, name=inspiring_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:17 np0005591760 systemd[1]: libpod-conmon-728d2912f0f0b674b7e4ae1474da9ef52c4a1585c53f5a16fbddc7aa797d55db.scope: Deactivated successfully.
Jan 22 04:32:17 np0005591760 podman[74036]: 2026-01-22 09:32:17.069348152 +0000 UTC m=+0.027181775 container create de44300dd34e462fab139afc34a6b0f0528e423a6bda8900009dd044bec2ffae (image=quay.io/ceph/ceph:v19, name=hardcore_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:17 np0005591760 systemd[1]: Started libpod-conmon-de44300dd34e462fab139afc34a6b0f0528e423a6bda8900009dd044bec2ffae.scope.
Jan 22 04:32:17 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d931b2a0d5457f4abefee538f793f96dd38fe6877212098c58bc1af91900ab95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d931b2a0d5457f4abefee538f793f96dd38fe6877212098c58bc1af91900ab95/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d931b2a0d5457f4abefee538f793f96dd38fe6877212098c58bc1af91900ab95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d931b2a0d5457f4abefee538f793f96dd38fe6877212098c58bc1af91900ab95/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:17 np0005591760 podman[74036]: 2026-01-22 09:32:17.124845437 +0000 UTC m=+0.082679081 container init de44300dd34e462fab139afc34a6b0f0528e423a6bda8900009dd044bec2ffae (image=quay.io/ceph/ceph:v19, name=hardcore_ardinghelli, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:17 np0005591760 podman[74036]: 2026-01-22 09:32:17.12885181 +0000 UTC m=+0.086685434 container start de44300dd34e462fab139afc34a6b0f0528e423a6bda8900009dd044bec2ffae (image=quay.io/ceph/ceph:v19, name=hardcore_ardinghelli, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 04:32:17 np0005591760 podman[74036]: 2026-01-22 09:32:17.130511889 +0000 UTC m=+0.088345514 container attach de44300dd34e462fab139afc34a6b0f0528e423a6bda8900009dd044bec2ffae (image=quay.io/ceph/ceph:v19, name=hardcore_ardinghelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:17 np0005591760 podman[74036]: 2026-01-22 09:32:17.0585848 +0000 UTC m=+0.016418434 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:17 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 22 04:32:17 np0005591760 ceph-mon[73948]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1946882157' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 04:32:17 np0005591760 ceph-mon[73948]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1946882157' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 22 04:32:17 np0005591760 hardcore_ardinghelli[74050]: 
Jan 22 04:32:17 np0005591760 hardcore_ardinghelli[74050]: [global]
Jan 22 04:32:17 np0005591760 hardcore_ardinghelli[74050]: #011fsid = 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:32:17 np0005591760 hardcore_ardinghelli[74050]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 22 04:32:17 np0005591760 systemd[1]: libpod-de44300dd34e462fab139afc34a6b0f0528e423a6bda8900009dd044bec2ffae.scope: Deactivated successfully.
Jan 22 04:32:17 np0005591760 podman[74036]: 2026-01-22 09:32:17.2844978 +0000 UTC m=+0.242331424 container died de44300dd34e462fab139afc34a6b0f0528e423a6bda8900009dd044bec2ffae (image=quay.io/ceph/ceph:v19, name=hardcore_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:32:17 np0005591760 podman[74036]: 2026-01-22 09:32:17.305300719 +0000 UTC m=+0.263134343 container remove de44300dd34e462fab139afc34a6b0f0528e423a6bda8900009dd044bec2ffae (image=quay.io/ceph/ceph:v19, name=hardcore_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:17 np0005591760 systemd[1]: libpod-conmon-de44300dd34e462fab139afc34a6b0f0528e423a6bda8900009dd044bec2ffae.scope: Deactivated successfully.
Jan 22 04:32:17 np0005591760 podman[74086]: 2026-01-22 09:32:17.348333856 +0000 UTC m=+0.027397462 container create 1fb008d7e81298a9b7febee77d1f5a151667da08b2e91f63883b02e39ed4e6a1 (image=quay.io/ceph/ceph:v19, name=happy_poincare, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:17 np0005591760 systemd[1]: Started libpod-conmon-1fb008d7e81298a9b7febee77d1f5a151667da08b2e91f63883b02e39ed4e6a1.scope.
Jan 22 04:32:17 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376ba3e68e47f2c50960b9b1c52972f2e88e8a104ca5855c9e79aeec19a4886b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376ba3e68e47f2c50960b9b1c52972f2e88e8a104ca5855c9e79aeec19a4886b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376ba3e68e47f2c50960b9b1c52972f2e88e8a104ca5855c9e79aeec19a4886b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376ba3e68e47f2c50960b9b1c52972f2e88e8a104ca5855c9e79aeec19a4886b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:17 np0005591760 podman[74086]: 2026-01-22 09:32:17.3881505 +0000 UTC m=+0.067214096 container init 1fb008d7e81298a9b7febee77d1f5a151667da08b2e91f63883b02e39ed4e6a1 (image=quay.io/ceph/ceph:v19, name=happy_poincare, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:32:17 np0005591760 podman[74086]: 2026-01-22 09:32:17.394029272 +0000 UTC m=+0.073092868 container start 1fb008d7e81298a9b7febee77d1f5a151667da08b2e91f63883b02e39ed4e6a1 (image=quay.io/ceph/ceph:v19, name=happy_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:17 np0005591760 podman[74086]: 2026-01-22 09:32:17.395245355 +0000 UTC m=+0.074308961 container attach 1fb008d7e81298a9b7febee77d1f5a151667da08b2e91f63883b02e39ed4e6a1 (image=quay.io/ceph/ceph:v19, name=happy_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:32:17 np0005591760 podman[74086]: 2026-01-22 09:32:17.337480775 +0000 UTC m=+0.016544391 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:17 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:32:17 np0005591760 ceph-mon[73948]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3307552437' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:32:17 np0005591760 systemd[1]: libpod-1fb008d7e81298a9b7febee77d1f5a151667da08b2e91f63883b02e39ed4e6a1.scope: Deactivated successfully.
Jan 22 04:32:17 np0005591760 conmon[74099]: conmon 1fb008d7e81298a9b7fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1fb008d7e81298a9b7febee77d1f5a151667da08b2e91f63883b02e39ed4e6a1.scope/container/memory.events
Jan 22 04:32:17 np0005591760 podman[74086]: 2026-01-22 09:32:17.548139286 +0000 UTC m=+0.227202883 container died 1fb008d7e81298a9b7febee77d1f5a151667da08b2e91f63883b02e39ed4e6a1 (image=quay.io/ceph/ceph:v19, name=happy_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 04:32:17 np0005591760 podman[74086]: 2026-01-22 09:32:17.565274671 +0000 UTC m=+0.244338267 container remove 1fb008d7e81298a9b7febee77d1f5a151667da08b2e91f63883b02e39ed4e6a1 (image=quay.io/ceph/ceph:v19, name=happy_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:17 np0005591760 systemd[1]: Stopping Ceph mon.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:32:17 np0005591760 systemd[1]: libpod-conmon-1fb008d7e81298a9b7febee77d1f5a151667da08b2e91f63883b02e39ed4e6a1.scope: Deactivated successfully.
Jan 22 04:32:17 np0005591760 ceph-mon[73948]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 22 04:32:17 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 22 04:32:17 np0005591760 ceph-mon[73948]: mon.compute-0@0(leader) e1 shutdown
Jan 22 04:32:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0[73944]: 2026-01-22T09:32:17.687+0000 7f6831ea8640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 22 04:32:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0[73944]: 2026-01-22T09:32:17.687+0000 7f6831ea8640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 22 04:32:17 np0005591760 ceph-mon[73948]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 22 04:32:17 np0005591760 ceph-mon[73948]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 22 04:32:17 np0005591760 podman[74157]: 2026-01-22 09:32:17.738920656 +0000 UTC m=+0.074532502 container died 2bf0d4f60a1160507385df543ea93d301352a5cacd864858a2f68db1251207bc (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:32:17 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4284fb49b095b7d855f72c80096fdd34f10aa2ee0e96245cf4cc41d2e4011119-merged.mount: Deactivated successfully.
Jan 22 04:32:17 np0005591760 podman[74157]: 2026-01-22 09:32:17.754885282 +0000 UTC m=+0.090497129 container remove 2bf0d4f60a1160507385df543ea93d301352a5cacd864858a2f68db1251207bc (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 22 04:32:17 np0005591760 bash[74157]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0
Jan 22 04:32:17 np0005591760 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 04:32:17 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@mon.compute-0.service: Deactivated successfully.
Jan 22 04:32:17 np0005591760 systemd[1]: Stopped Ceph mon.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:32:17 np0005591760 systemd[1]: Starting Ceph mon.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:32:17 np0005591760 podman[74238]: 2026-01-22 09:32:17.978801587 +0000 UTC m=+0.027468014 container create 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c45ca39374941b17396feaae86d3c8517e1edb56035279b7f0ed776848a5fce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c45ca39374941b17396feaae86d3c8517e1edb56035279b7f0ed776848a5fce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c45ca39374941b17396feaae86d3c8517e1edb56035279b7f0ed776848a5fce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c45ca39374941b17396feaae86d3c8517e1edb56035279b7f0ed776848a5fce/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:18 np0005591760 podman[74238]: 2026-01-22 09:32:18.016536637 +0000 UTC m=+0.065203074 container init 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 04:32:18 np0005591760 podman[74238]: 2026-01-22 09:32:18.021056077 +0000 UTC m=+0.069722504 container start 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:18 np0005591760 bash[74238]: 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad
Jan 22 04:32:18 np0005591760 podman[74238]: 2026-01-22 09:32:17.967188864 +0000 UTC m=+0.015855301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:18 np0005591760 systemd[1]: Started Ceph mon.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: pidfile_write: ignore empty --pid-file
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: load: jerasure load: lrc 
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: RocksDB version: 7.9.2
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Git sha 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: DB SUMMARY
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: DB Session ID:  7MI1YN0I5S0TQSJVCTNU
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: CURRENT file:  CURRENT
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 46813 ; 
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                         Options.error_if_exists: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                       Options.create_if_missing: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                                     Options.env: 0x55d6a441bc20
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                                Options.info_log: 0x55d6a5b1fac0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                              Options.statistics: (nil)
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                               Options.use_fsync: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                              Options.db_log_dir: 
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                                 Options.wal_dir: 
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                    Options.write_buffer_manager: 0x55d6a5b23900
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                  Options.unordered_write: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                               Options.row_cache: None
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                              Options.wal_filter: None
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.two_write_queues: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.wal_compression: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.atomic_flush: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.max_background_jobs: 2
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.max_background_compactions: -1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.max_subcompactions: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.max_total_wal_size: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                          Options.max_open_files: -1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:       Options.compaction_readahead_size: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Compression algorithms supported:
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: #011kZSTD supported: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: #011kXpressCompression supported: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: #011kBZip2Compression supported: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: #011kLZ4Compression supported: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: #011kZlibCompression supported: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: #011kSnappyCompression supported: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:           Options.merge_operator: 
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:        Options.compaction_filter: None
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d6a5b1f760)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d6a5b429b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:        Options.write_buffer_size: 33554432
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:  Options.max_write_buffer_number: 2
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:          Options.compression: NoCompression
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.num_levels: 7
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 804a04cf-10ce-4c4c-aa43-09122b4af995
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074338052938, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074338054465, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 46708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 117, "table_properties": {"data_size": 45279, "index_size": 135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2753, "raw_average_key_size": 31, "raw_value_size": 43072, "raw_average_value_size": 489, "num_data_blocks": 7, "num_entries": 88, "num_filter_entries": 88, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074338, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074338054561, "job": 1, "event": "recovery_finished"}
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d6a5b44e00
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: DB pointer 0x55d6a5b54000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   47.51 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     33.8      0.00              0.00         1    0.001       0      0       0.0       0.0
 Sum      2/0   47.51 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     33.8      0.00              0.00         1    0.001       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     33.8      0.00              0.00         1    0.001       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     33.8      0.00              0.00         1    0.001       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 6.43 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 6.43 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d6a5b429b0#2 capacity: 512.00 MB usage: 0.75 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.33 KB,6.25849e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@-1(???) e1 preinit fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@-1(???).mds e1 new map
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@-1(???).mds e1 print_map
e1
btime 2026-01-22T09:32:16:777810+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : last_changed 2026-01-22T09:32:15.320230+0000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : created 2026-01-22T09:32:15.320230+0000
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsmap 
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 22 04:32:18 np0005591760 podman[74255]: 2026-01-22 09:32:18.070210645 +0000 UTC m=+0.028721168 container create 95d49ba13e806033f0407295640239e7202459e73ac1ab1f71df31bd3de64c34 (image=quay.io/ceph/ceph:v19, name=funny_moore, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:18 np0005591760 systemd[1]: Started libpod-conmon-95d49ba13e806033f0407295640239e7202459e73ac1ab1f71df31bd3de64c34.scope.
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 22 04:32:18 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0787461e9dfa7df5a76363e241d8541abdbd935863590a2bb74200f9f37cc1e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0787461e9dfa7df5a76363e241d8541abdbd935863590a2bb74200f9f37cc1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0787461e9dfa7df5a76363e241d8541abdbd935863590a2bb74200f9f37cc1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:18 np0005591760 podman[74255]: 2026-01-22 09:32:18.14017748 +0000 UTC m=+0.098688032 container init 95d49ba13e806033f0407295640239e7202459e73ac1ab1f71df31bd3de64c34 (image=quay.io/ceph/ceph:v19, name=funny_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 04:32:18 np0005591760 podman[74255]: 2026-01-22 09:32:18.145113534 +0000 UTC m=+0.103624068 container start 95d49ba13e806033f0407295640239e7202459e73ac1ab1f71df31bd3de64c34 (image=quay.io/ceph/ceph:v19, name=funny_moore, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 22 04:32:18 np0005591760 podman[74255]: 2026-01-22 09:32:18.146284342 +0000 UTC m=+0.104794876 container attach 95d49ba13e806033f0407295640239e7202459e73ac1ab1f71df31bd3de64c34 (image=quay.io/ceph/ceph:v19, name=funny_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:18 np0005591760 podman[74255]: 2026-01-22 09:32:18.058732076 +0000 UTC m=+0.017242629 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 22 04:32:18 np0005591760 systemd[1]: libpod-95d49ba13e806033f0407295640239e7202459e73ac1ab1f71df31bd3de64c34.scope: Deactivated successfully.
Jan 22 04:32:18 np0005591760 podman[74255]: 2026-01-22 09:32:18.301907219 +0000 UTC m=+0.260417752 container died 95d49ba13e806033f0407295640239e7202459e73ac1ab1f71df31bd3de64c34 (image=quay.io/ceph/ceph:v19, name=funny_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:18 np0005591760 podman[74255]: 2026-01-22 09:32:18.321291221 +0000 UTC m=+0.279801755 container remove 95d49ba13e806033f0407295640239e7202459e73ac1ab1f71df31bd3de64c34 (image=quay.io/ceph/ceph:v19, name=funny_moore, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 04:32:18 np0005591760 systemd[1]: libpod-conmon-95d49ba13e806033f0407295640239e7202459e73ac1ab1f71df31bd3de64c34.scope: Deactivated successfully.
Jan 22 04:32:18 np0005591760 podman[74339]: 2026-01-22 09:32:18.362886208 +0000 UTC m=+0.027299827 container create a02e1449630022f829bf78620c28398044b70cdd692f9b73373892ae8f2d7b48 (image=quay.io/ceph/ceph:v19, name=wonderful_northcutt, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:18 np0005591760 systemd[1]: Started libpod-conmon-a02e1449630022f829bf78620c28398044b70cdd692f9b73373892ae8f2d7b48.scope.
Jan 22 04:32:18 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ebdee30cda69af3e4412d14853dc4bd6666f35808917b576f45311b23bc173/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ebdee30cda69af3e4412d14853dc4bd6666f35808917b576f45311b23bc173/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ebdee30cda69af3e4412d14853dc4bd6666f35808917b576f45311b23bc173/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:18 np0005591760 podman[74339]: 2026-01-22 09:32:18.40987306 +0000 UTC m=+0.074286679 container init a02e1449630022f829bf78620c28398044b70cdd692f9b73373892ae8f2d7b48 (image=quay.io/ceph/ceph:v19, name=wonderful_northcutt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:18 np0005591760 podman[74339]: 2026-01-22 09:32:18.413822986 +0000 UTC m=+0.078236595 container start a02e1449630022f829bf78620c28398044b70cdd692f9b73373892ae8f2d7b48 (image=quay.io/ceph/ceph:v19, name=wonderful_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:18 np0005591760 podman[74339]: 2026-01-22 09:32:18.415204832 +0000 UTC m=+0.079618441 container attach a02e1449630022f829bf78620c28398044b70cdd692f9b73373892ae8f2d7b48 (image=quay.io/ceph/ceph:v19, name=wonderful_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:18 np0005591760 podman[74339]: 2026-01-22 09:32:18.352397806 +0000 UTC m=+0.016811414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 22 04:32:18 np0005591760 systemd[1]: libpod-a02e1449630022f829bf78620c28398044b70cdd692f9b73373892ae8f2d7b48.scope: Deactivated successfully.
Jan 22 04:32:18 np0005591760 conmon[74357]: conmon a02e1449630022f829bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a02e1449630022f829bf78620c28398044b70cdd692f9b73373892ae8f2d7b48.scope/container/memory.events
Jan 22 04:32:18 np0005591760 podman[74339]: 2026-01-22 09:32:18.569458446 +0000 UTC m=+0.233872055 container died a02e1449630022f829bf78620c28398044b70cdd692f9b73373892ae8f2d7b48 (image=quay.io/ceph/ceph:v19, name=wonderful_northcutt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:18 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b0ebdee30cda69af3e4412d14853dc4bd6666f35808917b576f45311b23bc173-merged.mount: Deactivated successfully.
Jan 22 04:32:18 np0005591760 podman[74339]: 2026-01-22 09:32:18.587021587 +0000 UTC m=+0.251435196 container remove a02e1449630022f829bf78620c28398044b70cdd692f9b73373892ae8f2d7b48 (image=quay.io/ceph/ceph:v19, name=wonderful_northcutt, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:32:18 np0005591760 systemd[1]: libpod-conmon-a02e1449630022f829bf78620c28398044b70cdd692f9b73373892ae8f2d7b48.scope: Deactivated successfully.
Jan 22 04:32:18 np0005591760 systemd[1]: Reloading.
Jan 22 04:32:18 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:32:18 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:32:18 np0005591760 systemd[1]: Reloading.
Jan 22 04:32:18 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:32:18 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:32:18 np0005591760 systemd[1]: Starting Ceph mgr.compute-0.rfmoog for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:32:19 np0005591760 podman[74506]: 2026-01-22 09:32:19.136823034 +0000 UTC m=+0.026095959 container create d582143798a4dc771769dd3b3a8a626cb43d5e458f2c9db677414fe39e87437b (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05b6f48298a9eff645e74f8cad3c3a97eef717e0839d78fa210a6a505ffeacf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05b6f48298a9eff645e74f8cad3c3a97eef717e0839d78fa210a6a505ffeacf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05b6f48298a9eff645e74f8cad3c3a97eef717e0839d78fa210a6a505ffeacf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05b6f48298a9eff645e74f8cad3c3a97eef717e0839d78fa210a6a505ffeacf/merged/var/lib/ceph/mgr/ceph-compute-0.rfmoog supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:19 np0005591760 podman[74506]: 2026-01-22 09:32:19.182223805 +0000 UTC m=+0.071496730 container init d582143798a4dc771769dd3b3a8a626cb43d5e458f2c9db677414fe39e87437b (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:19 np0005591760 podman[74506]: 2026-01-22 09:32:19.186266266 +0000 UTC m=+0.075539191 container start d582143798a4dc771769dd3b3a8a626cb43d5e458f2c9db677414fe39e87437b (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 04:32:19 np0005591760 bash[74506]: d582143798a4dc771769dd3b3a8a626cb43d5e458f2c9db677414fe39e87437b
Jan 22 04:32:19 np0005591760 podman[74506]: 2026-01-22 09:32:19.125680657 +0000 UTC m=+0.014953582 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:19 np0005591760 systemd[1]: Started Ceph mgr.compute-0.rfmoog for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:32:19 np0005591760 ceph-mgr[74522]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 04:32:19 np0005591760 ceph-mgr[74522]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 22 04:32:19 np0005591760 ceph-mgr[74522]: pidfile_write: ignore empty --pid-file
Jan 22 04:32:19 np0005591760 podman[74523]: 2026-01-22 09:32:19.235623326 +0000 UTC m=+0.029110051 container create d8f40e73936ebdf70d23e9d944cdfd5210774a48dd4aa1bf442c52d01a6d618b (image=quay.io/ceph/ceph:v19, name=recursing_ptolemy, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 22 04:32:19 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'alerts'
Jan 22 04:32:19 np0005591760 systemd[1]: Started libpod-conmon-d8f40e73936ebdf70d23e9d944cdfd5210774a48dd4aa1bf442c52d01a6d618b.scope.
Jan 22 04:32:19 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6690a463960f813f4912e5fa8b4f0613783f714aa0ae3d167eaad2dc1b9cd8f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6690a463960f813f4912e5fa8b4f0613783f714aa0ae3d167eaad2dc1b9cd8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6690a463960f813f4912e5fa8b4f0613783f714aa0ae3d167eaad2dc1b9cd8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:19 np0005591760 podman[74523]: 2026-01-22 09:32:19.287971134 +0000 UTC m=+0.081457869 container init d8f40e73936ebdf70d23e9d944cdfd5210774a48dd4aa1bf442c52d01a6d618b (image=quay.io/ceph/ceph:v19, name=recursing_ptolemy, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 22 04:32:19 np0005591760 podman[74523]: 2026-01-22 09:32:19.292142007 +0000 UTC m=+0.085628731 container start d8f40e73936ebdf70d23e9d944cdfd5210774a48dd4aa1bf442c52d01a6d618b (image=quay.io/ceph/ceph:v19, name=recursing_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:19 np0005591760 podman[74523]: 2026-01-22 09:32:19.293207736 +0000 UTC m=+0.086694461 container attach d8f40e73936ebdf70d23e9d944cdfd5210774a48dd4aa1bf442c52d01a6d618b (image=quay.io/ceph/ceph:v19, name=recursing_ptolemy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:19 np0005591760 podman[74523]: 2026-01-22 09:32:19.223766563 +0000 UTC m=+0.017253308 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:19 np0005591760 ceph-mgr[74522]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 04:32:19 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'balancer'
Jan 22 04:32:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:19.331+0000 7f23f3e5c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 04:32:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:19.401+0000 7f23f3e5c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 04:32:19 np0005591760 ceph-mgr[74522]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 04:32:19 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'cephadm'
Jan 22 04:32:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 22 04:32:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1119025483' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]: 
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]: {
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "health": {
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "status": "HEALTH_OK",
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "checks": {},
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "mutes": []
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    },
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "election_epoch": 5,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "quorum": [
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        0
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    ],
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "quorum_names": [
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "compute-0"
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    ],
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "quorum_age": 1,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "monmap": {
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "epoch": 1,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "min_mon_release_name": "squid",
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "num_mons": 1
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    },
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "osdmap": {
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "epoch": 1,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "num_osds": 0,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "num_up_osds": 0,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "osd_up_since": 0,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "num_in_osds": 0,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "osd_in_since": 0,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "num_remapped_pgs": 0
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    },
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "pgmap": {
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "pgs_by_state": [],
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "num_pgs": 0,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "num_pools": 0,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "num_objects": 0,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "data_bytes": 0,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "bytes_used": 0,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "bytes_avail": 0,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "bytes_total": 0
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    },
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "fsmap": {
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "epoch": 1,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "btime": "2026-01-22T09:32:16:777810+0000",
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "by_rank": [],
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "up:standby": 0
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    },
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "mgrmap": {
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "available": false,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "num_standbys": 0,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "modules": [
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:            "iostat",
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:            "nfs",
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:            "restful"
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        ],
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "services": {}
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    },
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "servicemap": {
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "epoch": 1,
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "modified": "2026-01-22T09:32:16.778638+0000",
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:        "services": {}
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    },
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]:    "progress_events": {}
Jan 22 04:32:19 np0005591760 recursing_ptolemy[74556]: }
Jan 22 04:32:19 np0005591760 systemd[1]: libpod-d8f40e73936ebdf70d23e9d944cdfd5210774a48dd4aa1bf442c52d01a6d618b.scope: Deactivated successfully.
Jan 22 04:32:19 np0005591760 conmon[74556]: conmon d8f40e73936ebdf70d23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8f40e73936ebdf70d23e9d944cdfd5210774a48dd4aa1bf442c52d01a6d618b.scope/container/memory.events
Jan 22 04:32:19 np0005591760 podman[74523]: 2026-01-22 09:32:19.447614049 +0000 UTC m=+0.241100774 container died d8f40e73936ebdf70d23e9d944cdfd5210774a48dd4aa1bf442c52d01a6d618b (image=quay.io/ceph/ceph:v19, name=recursing_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 22 04:32:19 np0005591760 systemd[1]: var-lib-containers-storage-overlay-f6690a463960f813f4912e5fa8b4f0613783f714aa0ae3d167eaad2dc1b9cd8f-merged.mount: Deactivated successfully.
Jan 22 04:32:19 np0005591760 podman[74523]: 2026-01-22 09:32:19.465406822 +0000 UTC m=+0.258893547 container remove d8f40e73936ebdf70d23e9d944cdfd5210774a48dd4aa1bf442c52d01a6d618b (image=quay.io/ceph/ceph:v19, name=recursing_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:19 np0005591760 systemd[1]: libpod-conmon-d8f40e73936ebdf70d23e9d944cdfd5210774a48dd4aa1bf442c52d01a6d618b.scope: Deactivated successfully.
Jan 22 04:32:19 np0005591760 chronyd[58483]: Selected source 99.28.14.242 (pool.ntp.org)
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'crash'
Jan 22 04:32:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:20.083+0000 7f23f3e5c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'dashboard'
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'devicehealth'
Jan 22 04:32:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:20.616+0000 7f23f3e5c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'diskprediction_local'
Jan 22 04:32:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 22 04:32:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 22 04:32:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  from numpy import show_config as show_numpy_config
Jan 22 04:32:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:20.760+0000 7f23f3e5c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'influx'
Jan 22 04:32:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:20.823+0000 7f23f3e5c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'insights'
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'iostat'
Jan 22 04:32:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:20.941+0000 7f23f3e5c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 04:32:20 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'k8sevents'
Jan 22 04:32:21 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'localpool'
Jan 22 04:32:21 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'mds_autoscaler'
Jan 22 04:32:21 np0005591760 podman[74603]: 2026-01-22 09:32:21.510696688 +0000 UTC m=+0.027661205 container create cb77a4ad80af9fbe12c2dc6761455b61c85b126e277d55561e62a45ada47520c (image=quay.io/ceph/ceph:v19, name=condescending_wilson, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:21 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'mirroring'
Jan 22 04:32:21 np0005591760 systemd[1]: Started libpod-conmon-cb77a4ad80af9fbe12c2dc6761455b61c85b126e277d55561e62a45ada47520c.scope.
Jan 22 04:32:21 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc7900a116635667642bd05d010c0366cf04633d39f180f2d7c31cd507ed01f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc7900a116635667642bd05d010c0366cf04633d39f180f2d7c31cd507ed01f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc7900a116635667642bd05d010c0366cf04633d39f180f2d7c31cd507ed01f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:21 np0005591760 podman[74603]: 2026-01-22 09:32:21.570837138 +0000 UTC m=+0.087801655 container init cb77a4ad80af9fbe12c2dc6761455b61c85b126e277d55561e62a45ada47520c (image=quay.io/ceph/ceph:v19, name=condescending_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:21 np0005591760 podman[74603]: 2026-01-22 09:32:21.575241722 +0000 UTC m=+0.092206229 container start cb77a4ad80af9fbe12c2dc6761455b61c85b126e277d55561e62a45ada47520c (image=quay.io/ceph/ceph:v19, name=condescending_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:32:21 np0005591760 podman[74603]: 2026-01-22 09:32:21.576407138 +0000 UTC m=+0.093371645 container attach cb77a4ad80af9fbe12c2dc6761455b61c85b126e277d55561e62a45ada47520c (image=quay.io/ceph/ceph:v19, name=condescending_wilson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 04:32:21 np0005591760 podman[74603]: 2026-01-22 09:32:21.498916941 +0000 UTC m=+0.015881468 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:21 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'nfs'
Jan 22 04:32:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 22 04:32:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860157267' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]: 
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]: {
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "health": {
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "status": "HEALTH_OK",
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "checks": {},
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "mutes": []
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    },
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "election_epoch": 5,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "quorum": [
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        0
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    ],
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "quorum_names": [
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "compute-0"
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    ],
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "quorum_age": 3,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "monmap": {
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "epoch": 1,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "min_mon_release_name": "squid",
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "num_mons": 1
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    },
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "osdmap": {
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "epoch": 1,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "num_osds": 0,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "num_up_osds": 0,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "osd_up_since": 0,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "num_in_osds": 0,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "osd_in_since": 0,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "num_remapped_pgs": 0
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    },
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "pgmap": {
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "pgs_by_state": [],
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "num_pgs": 0,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "num_pools": 0,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "num_objects": 0,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "data_bytes": 0,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "bytes_used": 0,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "bytes_avail": 0,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "bytes_total": 0
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    },
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "fsmap": {
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "epoch": 1,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "btime": "2026-01-22T09:32:16:777810+0000",
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "by_rank": [],
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "up:standby": 0
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    },
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "mgrmap": {
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "available": false,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "num_standbys": 0,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "modules": [
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:            "iostat",
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:            "nfs",
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:            "restful"
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        ],
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "services": {}
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    },
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "servicemap": {
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "epoch": 1,
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "modified": "2026-01-22T09:32:16.778638+0000",
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:        "services": {}
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    },
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]:    "progress_events": {}
Jan 22 04:32:21 np0005591760 condescending_wilson[74617]: }
Jan 22 04:32:21 np0005591760 systemd[1]: libpod-cb77a4ad80af9fbe12c2dc6761455b61c85b126e277d55561e62a45ada47520c.scope: Deactivated successfully.
Jan 22 04:32:21 np0005591760 conmon[74617]: conmon cb77a4ad80af9fbe12c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb77a4ad80af9fbe12c2dc6761455b61c85b126e277d55561e62a45ada47520c.scope/container/memory.events
Jan 22 04:32:21 np0005591760 podman[74603]: 2026-01-22 09:32:21.728409469 +0000 UTC m=+0.245373975 container died cb77a4ad80af9fbe12c2dc6761455b61c85b126e277d55561e62a45ada47520c (image=quay.io/ceph/ceph:v19, name=condescending_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:21 np0005591760 systemd[1]: var-lib-containers-storage-overlay-2dc7900a116635667642bd05d010c0366cf04633d39f180f2d7c31cd507ed01f-merged.mount: Deactivated successfully.
Jan 22 04:32:21 np0005591760 podman[74603]: 2026-01-22 09:32:21.757653691 +0000 UTC m=+0.274618188 container remove cb77a4ad80af9fbe12c2dc6761455b61c85b126e277d55561e62a45ada47520c (image=quay.io/ceph/ceph:v19, name=condescending_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 22 04:32:21 np0005591760 systemd[1]: libpod-conmon-cb77a4ad80af9fbe12c2dc6761455b61c85b126e277d55561e62a45ada47520c.scope: Deactivated successfully.
Jan 22 04:32:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:21.815+0000 7f23f3e5c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 04:32:21 np0005591760 ceph-mgr[74522]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 04:32:21 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'orchestrator'
Jan 22 04:32:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:22.002+0000 7f23f3e5c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'osd_perf_query'
Jan 22 04:32:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:22.068+0000 7f23f3e5c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'osd_support'
Jan 22 04:32:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:22.127+0000 7f23f3e5c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'pg_autoscaler'
Jan 22 04:32:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:22.195+0000 7f23f3e5c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'progress'
Jan 22 04:32:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:22.256+0000 7f23f3e5c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'prometheus'
Jan 22 04:32:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:22.550+0000 7f23f3e5c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rbd_support'
Jan 22 04:32:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:22.634+0000 7f23f3e5c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'restful'
Jan 22 04:32:22 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rgw'
Jan 22 04:32:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:23.011+0000 7f23f3e5c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rook'
Jan 22 04:32:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:23.489+0000 7f23f3e5c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'selftest'
Jan 22 04:32:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:23.550+0000 7f23f3e5c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'snap_schedule'
Jan 22 04:32:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:23.620+0000 7f23f3e5c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'stats'
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'status'
Jan 22 04:32:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:23.748+0000 7f23f3e5c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'telegraf'
Jan 22 04:32:23 np0005591760 podman[74653]: 2026-01-22 09:32:23.802695272 +0000 UTC m=+0.028276328 container create 4ebfe7050d022cf13ebcba82078ac57cd5c9d9da98de5c7740ef38cce2cc9230 (image=quay.io/ceph/ceph:v19, name=distracted_noether, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:23.810+0000 7f23f3e5c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'telemetry'
Jan 22 04:32:23 np0005591760 systemd[1]: Started libpod-conmon-4ebfe7050d022cf13ebcba82078ac57cd5c9d9da98de5c7740ef38cce2cc9230.scope.
Jan 22 04:32:23 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d5050e77ba3ce8ce919fcf4c9fe101a7c613dcc71811a0ad0cae6453a0f71b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d5050e77ba3ce8ce919fcf4c9fe101a7c613dcc71811a0ad0cae6453a0f71b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d5050e77ba3ce8ce919fcf4c9fe101a7c613dcc71811a0ad0cae6453a0f71b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:23 np0005591760 podman[74653]: 2026-01-22 09:32:23.846509913 +0000 UTC m=+0.072090990 container init 4ebfe7050d022cf13ebcba82078ac57cd5c9d9da98de5c7740ef38cce2cc9230 (image=quay.io/ceph/ceph:v19, name=distracted_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:23 np0005591760 podman[74653]: 2026-01-22 09:32:23.850350372 +0000 UTC m=+0.075931430 container start 4ebfe7050d022cf13ebcba82078ac57cd5c9d9da98de5c7740ef38cce2cc9230 (image=quay.io/ceph/ceph:v19, name=distracted_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 04:32:23 np0005591760 podman[74653]: 2026-01-22 09:32:23.85139411 +0000 UTC m=+0.076975167 container attach 4ebfe7050d022cf13ebcba82078ac57cd5c9d9da98de5c7740ef38cce2cc9230 (image=quay.io/ceph/ceph:v19, name=distracted_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 22 04:32:23 np0005591760 podman[74653]: 2026-01-22 09:32:23.791478244 +0000 UTC m=+0.017059321 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:23.943+0000 7f23f3e5c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 04:32:23 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'test_orchestrator'
Jan 22 04:32:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 22 04:32:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1519921260' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 04:32:23 np0005591760 distracted_noether[74666]: 
Jan 22 04:32:23 np0005591760 distracted_noether[74666]: {
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "health": {
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "status": "HEALTH_OK",
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "checks": {},
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "mutes": []
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    },
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "election_epoch": 5,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "quorum": [
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        0
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    ],
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "quorum_names": [
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "compute-0"
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    ],
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "quorum_age": 5,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "monmap": {
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "epoch": 1,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "min_mon_release_name": "squid",
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "num_mons": 1
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    },
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "osdmap": {
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "epoch": 1,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "num_osds": 0,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "num_up_osds": 0,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "osd_up_since": 0,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "num_in_osds": 0,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "osd_in_since": 0,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "num_remapped_pgs": 0
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    },
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "pgmap": {
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "pgs_by_state": [],
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "num_pgs": 0,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "num_pools": 0,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "num_objects": 0,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "data_bytes": 0,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "bytes_used": 0,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "bytes_avail": 0,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "bytes_total": 0
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    },
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "fsmap": {
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "epoch": 1,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "btime": "2026-01-22T09:32:16.777810+0000",
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "by_rank": [],
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "up:standby": 0
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    },
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "mgrmap": {
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "available": false,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "num_standbys": 0,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "modules": [
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:            "iostat",
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:            "nfs",
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:            "restful"
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        ],
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "services": {}
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    },
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "servicemap": {
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "epoch": 1,
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "modified": "2026-01-22T09:32:16.778638+0000",
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:        "services": {}
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    },
Jan 22 04:32:23 np0005591760 distracted_noether[74666]:    "progress_events": {}
Jan 22 04:32:23 np0005591760 distracted_noether[74666]: }
Jan 22 04:32:24 np0005591760 systemd[1]: libpod-4ebfe7050d022cf13ebcba82078ac57cd5c9d9da98de5c7740ef38cce2cc9230.scope: Deactivated successfully.
Jan 22 04:32:24 np0005591760 podman[74653]: 2026-01-22 09:32:24.006445248 +0000 UTC m=+0.232026305 container died 4ebfe7050d022cf13ebcba82078ac57cd5c9d9da98de5c7740ef38cce2cc9230 (image=quay.io/ceph/ceph:v19, name=distracted_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:24 np0005591760 systemd[1]: var-lib-containers-storage-overlay-15d5050e77ba3ce8ce919fcf4c9fe101a7c613dcc71811a0ad0cae6453a0f71b-merged.mount: Deactivated successfully.
Jan 22 04:32:24 np0005591760 podman[74653]: 2026-01-22 09:32:24.023977581 +0000 UTC m=+0.249558628 container remove 4ebfe7050d022cf13ebcba82078ac57cd5c9d9da98de5c7740ef38cce2cc9230 (image=quay.io/ceph/ceph:v19, name=distracted_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 22 04:32:24 np0005591760 systemd[1]: libpod-conmon-4ebfe7050d022cf13ebcba82078ac57cd5c9d9da98de5c7740ef38cce2cc9230.scope: Deactivated successfully.
Jan 22 04:32:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:24.136+0000 7f23f3e5c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'volumes'
Jan 22 04:32:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:24.366+0000 7f23f3e5c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'zabbix'
Jan 22 04:32:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:24.427+0000 7f23f3e5c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: ms_deliver_dispatch: unhandled message 0x563430a889c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rfmoog
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map Activating!
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map I am now activating
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.rfmoog(active, starting, since 0.00456862s)
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e1 all = 1
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rfmoog", "id": "compute-0.rfmoog"} v 0)
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr metadata", "who": "compute-0.rfmoog", "id": "compute-0.rfmoog"}]: dispatch
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Manager daemon compute-0.rfmoog is now available
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: balancer
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [balancer INFO root] Starting
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: crash
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:32:24
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [balancer INFO root] No pools available
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: devicehealth
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] Starting
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: iostat
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: nfs
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: orchestrator
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: pg_autoscaler
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: progress
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [progress INFO root] Loading...
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [progress INFO root] No stored events to load
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [progress INFO root] Loaded [] historic events
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [progress INFO root] Loaded OSDMap, ready.
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] recovery thread starting
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] starting setup
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: rbd_support
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: restful
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"} v 0)
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"}]: dispatch
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [restful INFO root] server_addr: :: server_port: 8003
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [restful WARNING root] server not running: no certificate configured
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: status
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: telemetry
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] PerfHandler: starting
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TaskHandler: starting
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"} v 0)
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"}]: dispatch
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] setup complete
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 22 04:32:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:24 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: volumes
Jan 22 04:32:25 np0005591760 ceph-mon[74254]: Activating manager daemon compute-0.rfmoog
Jan 22 04:32:25 np0005591760 ceph-mon[74254]: Manager daemon compute-0.rfmoog is now available
Jan 22 04:32:25 np0005591760 ceph-mon[74254]: from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"}]: dispatch
Jan 22 04:32:25 np0005591760 ceph-mon[74254]: from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"}]: dispatch
Jan 22 04:32:25 np0005591760 ceph-mon[74254]: from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:25 np0005591760 ceph-mon[74254]: from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:25 np0005591760 ceph-mon[74254]: from='mgr.14102 192.168.122.100:0/220309986' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:25 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.rfmoog(active, since 1.00917s)
Jan 22 04:32:26 np0005591760 podman[74781]: 2026-01-22 09:32:26.066293212 +0000 UTC m=+0.026071793 container create df02afb05fbf44f88a7835ccd6cb509f659fce0595da2f8becc2d8f4e3402516 (image=quay.io/ceph/ceph:v19, name=hungry_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 04:32:26 np0005591760 systemd[1]: Started libpod-conmon-df02afb05fbf44f88a7835ccd6cb509f659fce0595da2f8becc2d8f4e3402516.scope.
Jan 22 04:32:26 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8ec5bbc939951bbb186d1b69860ee873bfb46ed455590c2fff13f580e8d544/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8ec5bbc939951bbb186d1b69860ee873bfb46ed455590c2fff13f580e8d544/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8ec5bbc939951bbb186d1b69860ee873bfb46ed455590c2fff13f580e8d544/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:26 np0005591760 podman[74781]: 2026-01-22 09:32:26.104507595 +0000 UTC m=+0.064286176 container init df02afb05fbf44f88a7835ccd6cb509f659fce0595da2f8becc2d8f4e3402516 (image=quay.io/ceph/ceph:v19, name=hungry_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:32:26 np0005591760 podman[74781]: 2026-01-22 09:32:26.108197291 +0000 UTC m=+0.067975872 container start df02afb05fbf44f88a7835ccd6cb509f659fce0595da2f8becc2d8f4e3402516 (image=quay.io/ceph/ceph:v19, name=hungry_allen, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 04:32:26 np0005591760 podman[74781]: 2026-01-22 09:32:26.109141522 +0000 UTC m=+0.068920103 container attach df02afb05fbf44f88a7835ccd6cb509f659fce0595da2f8becc2d8f4e3402516 (image=quay.io/ceph/ceph:v19, name=hungry_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:26 np0005591760 podman[74781]: 2026-01-22 09:32:26.05581572 +0000 UTC m=+0.015594320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 22 04:32:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1479435621' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 04:32:26 np0005591760 hungry_allen[74794]: 
Jan 22 04:32:26 np0005591760 hungry_allen[74794]: {
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "health": {
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "status": "HEALTH_OK",
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "checks": {},
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "mutes": []
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    },
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "election_epoch": 5,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "quorum": [
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        0
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    ],
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "quorum_names": [
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "compute-0"
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    ],
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "quorum_age": 8,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "monmap": {
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "epoch": 1,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "min_mon_release_name": "squid",
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "num_mons": 1
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    },
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "osdmap": {
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "epoch": 1,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "num_osds": 0,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "num_up_osds": 0,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "osd_up_since": 0,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "num_in_osds": 0,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "osd_in_since": 0,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "num_remapped_pgs": 0
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    },
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "pgmap": {
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "pgs_by_state": [],
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "num_pgs": 0,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "num_pools": 0,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "num_objects": 0,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "data_bytes": 0,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "bytes_used": 0,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "bytes_avail": 0,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "bytes_total": 0
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    },
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "fsmap": {
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "epoch": 1,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "btime": "2026-01-22T09:32:16:777810+0000",
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "by_rank": [],
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "up:standby": 0
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    },
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "mgrmap": {
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "available": true,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "num_standbys": 0,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "modules": [
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:            "iostat",
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:            "nfs",
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:            "restful"
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        ],
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "services": {}
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    },
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "servicemap": {
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "epoch": 1,
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "modified": "2026-01-22T09:32:16.778638+0000",
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:        "services": {}
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    },
Jan 22 04:32:26 np0005591760 hungry_allen[74794]:    "progress_events": {}
Jan 22 04:32:26 np0005591760 hungry_allen[74794]: }
Jan 22 04:32:26 np0005591760 systemd[1]: libpod-df02afb05fbf44f88a7835ccd6cb509f659fce0595da2f8becc2d8f4e3402516.scope: Deactivated successfully.
Jan 22 04:32:26 np0005591760 podman[74781]: 2026-01-22 09:32:26.431742331 +0000 UTC m=+0.391520922 container died df02afb05fbf44f88a7835ccd6cb509f659fce0595da2f8becc2d8f4e3402516 (image=quay.io/ceph/ceph:v19, name=hungry_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 04:32:26 np0005591760 ceph-mgr[74522]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 04:32:26 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.rfmoog(active, since 2s)
Jan 22 04:32:26 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1f8ec5bbc939951bbb186d1b69860ee873bfb46ed455590c2fff13f580e8d544-merged.mount: Deactivated successfully.
Jan 22 04:32:26 np0005591760 podman[74781]: 2026-01-22 09:32:26.450623807 +0000 UTC m=+0.410402388 container remove df02afb05fbf44f88a7835ccd6cb509f659fce0595da2f8becc2d8f4e3402516 (image=quay.io/ceph/ceph:v19, name=hungry_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 22 04:32:26 np0005591760 systemd[1]: libpod-conmon-df02afb05fbf44f88a7835ccd6cb509f659fce0595da2f8becc2d8f4e3402516.scope: Deactivated successfully.
Jan 22 04:32:26 np0005591760 podman[74830]: 2026-01-22 09:32:26.492403702 +0000 UTC m=+0.026871650 container create 9947228e5e5d192be2fe58ec497f302989987c7dbfbcc659440fbdf1ae7dc6c9 (image=quay.io/ceph/ceph:v19, name=adoring_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:26 np0005591760 systemd[1]: Started libpod-conmon-9947228e5e5d192be2fe58ec497f302989987c7dbfbcc659440fbdf1ae7dc6c9.scope.
Jan 22 04:32:26 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f72c5ebd3e1ebdc5158b42810952e4aa626a50b30f8101845cfc2feb98621c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f72c5ebd3e1ebdc5158b42810952e4aa626a50b30f8101845cfc2feb98621c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f72c5ebd3e1ebdc5158b42810952e4aa626a50b30f8101845cfc2feb98621c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f72c5ebd3e1ebdc5158b42810952e4aa626a50b30f8101845cfc2feb98621c8/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:26 np0005591760 podman[74830]: 2026-01-22 09:32:26.538997482 +0000 UTC m=+0.073465451 container init 9947228e5e5d192be2fe58ec497f302989987c7dbfbcc659440fbdf1ae7dc6c9 (image=quay.io/ceph/ceph:v19, name=adoring_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:26 np0005591760 podman[74830]: 2026-01-22 09:32:26.542742562 +0000 UTC m=+0.077210510 container start 9947228e5e5d192be2fe58ec497f302989987c7dbfbcc659440fbdf1ae7dc6c9 (image=quay.io/ceph/ceph:v19, name=adoring_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 04:32:26 np0005591760 podman[74830]: 2026-01-22 09:32:26.543804615 +0000 UTC m=+0.078272563 container attach 9947228e5e5d192be2fe58ec497f302989987c7dbfbcc659440fbdf1ae7dc6c9 (image=quay.io/ceph/ceph:v19, name=adoring_fermi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 22 04:32:26 np0005591760 podman[74830]: 2026-01-22 09:32:26.481175283 +0000 UTC m=+0.015643231 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 22 04:32:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1412861506' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 04:32:26 np0005591760 adoring_fermi[74843]: 
Jan 22 04:32:26 np0005591760 adoring_fermi[74843]: [global]
Jan 22 04:32:26 np0005591760 adoring_fermi[74843]: 	fsid = 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:32:26 np0005591760 adoring_fermi[74843]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 22 04:32:26 np0005591760 systemd[1]: libpod-9947228e5e5d192be2fe58ec497f302989987c7dbfbcc659440fbdf1ae7dc6c9.scope: Deactivated successfully.
Jan 22 04:32:26 np0005591760 podman[74869]: 2026-01-22 09:32:26.825064256 +0000 UTC m=+0.015358345 container died 9947228e5e5d192be2fe58ec497f302989987c7dbfbcc659440fbdf1ae7dc6c9 (image=quay.io/ceph/ceph:v19, name=adoring_fermi, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Jan 22 04:32:26 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0f72c5ebd3e1ebdc5158b42810952e4aa626a50b30f8101845cfc2feb98621c8-merged.mount: Deactivated successfully.
Jan 22 04:32:26 np0005591760 podman[74869]: 2026-01-22 09:32:26.841991247 +0000 UTC m=+0.032285317 container remove 9947228e5e5d192be2fe58ec497f302989987c7dbfbcc659440fbdf1ae7dc6c9 (image=quay.io/ceph/ceph:v19, name=adoring_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 04:32:26 np0005591760 systemd[1]: libpod-conmon-9947228e5e5d192be2fe58ec497f302989987c7dbfbcc659440fbdf1ae7dc6c9.scope: Deactivated successfully.
Jan 22 04:32:26 np0005591760 podman[74881]: 2026-01-22 09:32:26.886430095 +0000 UTC m=+0.027097958 container create bc3a985d9aee982b2a149652b3118043c2929e4d25d224c52e341d1e13376376 (image=quay.io/ceph/ceph:v19, name=quizzical_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 04:32:26 np0005591760 systemd[1]: Started libpod-conmon-bc3a985d9aee982b2a149652b3118043c2929e4d25d224c52e341d1e13376376.scope.
Jan 22 04:32:26 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c12552ba91bd0a0f8e7cc7683f6041798746aa841df7dc75afa667123b2b159/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c12552ba91bd0a0f8e7cc7683f6041798746aa841df7dc75afa667123b2b159/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c12552ba91bd0a0f8e7cc7683f6041798746aa841df7dc75afa667123b2b159/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:26 np0005591760 podman[74881]: 2026-01-22 09:32:26.935981382 +0000 UTC m=+0.076649233 container init bc3a985d9aee982b2a149652b3118043c2929e4d25d224c52e341d1e13376376 (image=quay.io/ceph/ceph:v19, name=quizzical_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 22 04:32:26 np0005591760 podman[74881]: 2026-01-22 09:32:26.94007013 +0000 UTC m=+0.080737991 container start bc3a985d9aee982b2a149652b3118043c2929e4d25d224c52e341d1e13376376 (image=quay.io/ceph/ceph:v19, name=quizzical_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 22 04:32:26 np0005591760 podman[74881]: 2026-01-22 09:32:26.941291312 +0000 UTC m=+0.081959174 container attach bc3a985d9aee982b2a149652b3118043c2929e4d25d224c52e341d1e13376376 (image=quay.io/ceph/ceph:v19, name=quizzical_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 04:32:26 np0005591760 podman[74881]: 2026-01-22 09:32:26.874695203 +0000 UTC m=+0.015363064 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 22 04:32:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2866186674' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 22 04:32:27 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1412861506' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 04:32:27 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/2866186674' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 22 04:32:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2866186674' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 22 04:32:27 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.rfmoog(active, since 3s)
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  1: '-n'
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  2: 'mgr.compute-0.rfmoog'
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  3: '-f'
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  4: '--setuser'
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  5: 'ceph'
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  6: '--setgroup'
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  7: 'ceph'
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  8: '--default-log-to-file=false'
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  9: '--default-log-to-journald=true'
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr respawn  exe_path /proc/self/exe
Jan 22 04:32:27 np0005591760 systemd[1]: libpod-bc3a985d9aee982b2a149652b3118043c2929e4d25d224c52e341d1e13376376.scope: Deactivated successfully.
Jan 22 04:32:27 np0005591760 podman[74920]: 2026-01-22 09:32:27.483118915 +0000 UTC m=+0.014916491 container died bc3a985d9aee982b2a149652b3118043c2929e4d25d224c52e341d1e13376376 (image=quay.io/ceph/ceph:v19, name=quizzical_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 22 04:32:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0c12552ba91bd0a0f8e7cc7683f6041798746aa841df7dc75afa667123b2b159-merged.mount: Deactivated successfully.
Jan 22 04:32:27 np0005591760 podman[74920]: 2026-01-22 09:32:27.500436623 +0000 UTC m=+0.032234189 container remove bc3a985d9aee982b2a149652b3118043c2929e4d25d224c52e341d1e13376376 (image=quay.io/ceph/ceph:v19, name=quizzical_kalam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 22 04:32:27 np0005591760 systemd[1]: libpod-conmon-bc3a985d9aee982b2a149652b3118043c2929e4d25d224c52e341d1e13376376.scope: Deactivated successfully.
Jan 22 04:32:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ignoring --setuser ceph since I am not root
Jan 22 04:32:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ignoring --setgroup ceph since I am not root
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: pidfile_write: ignore empty --pid-file
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'alerts'
Jan 22 04:32:27 np0005591760 podman[74932]: 2026-01-22 09:32:27.545979864 +0000 UTC m=+0.027170435 container create c0c2bcdfc35f558e8d32d3c4f223ceac9a4ea5067fabec7e0f4256f8d38794de (image=quay.io/ceph/ceph:v19, name=happy_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 22 04:32:27 np0005591760 systemd[1]: Started libpod-conmon-c0c2bcdfc35f558e8d32d3c4f223ceac9a4ea5067fabec7e0f4256f8d38794de.scope.
Jan 22 04:32:27 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2834917486c2d16a0737c557232b5e2be3c2b61a57d006a64d1547d25468428f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2834917486c2d16a0737c557232b5e2be3c2b61a57d006a64d1547d25468428f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2834917486c2d16a0737c557232b5e2be3c2b61a57d006a64d1547d25468428f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:27 np0005591760 podman[74932]: 2026-01-22 09:32:27.599883766 +0000 UTC m=+0.081074337 container init c0c2bcdfc35f558e8d32d3c4f223ceac9a4ea5067fabec7e0f4256f8d38794de (image=quay.io/ceph/ceph:v19, name=happy_roentgen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 04:32:27 np0005591760 podman[74932]: 2026-01-22 09:32:27.604089725 +0000 UTC m=+0.085280296 container start c0c2bcdfc35f558e8d32d3c4f223ceac9a4ea5067fabec7e0f4256f8d38794de (image=quay.io/ceph/ceph:v19, name=happy_roentgen, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:27 np0005591760 podman[74932]: 2026-01-22 09:32:27.605047701 +0000 UTC m=+0.086238282 container attach c0c2bcdfc35f558e8d32d3c4f223ceac9a4ea5067fabec7e0f4256f8d38794de (image=quay.io/ceph/ceph:v19, name=happy_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 04:32:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:27.631+0000 7f7f0e167140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'balancer'
Jan 22 04:32:27 np0005591760 podman[74932]: 2026-01-22 09:32:27.534914052 +0000 UTC m=+0.016104642 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 04:32:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:27.702+0000 7f7f0e167140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 04:32:27 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'cephadm'
Jan 22 04:32:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 22 04:32:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/53377054' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 04:32:27 np0005591760 happy_roentgen[74967]: {
Jan 22 04:32:27 np0005591760 happy_roentgen[74967]:    "epoch": 5,
Jan 22 04:32:27 np0005591760 happy_roentgen[74967]:    "available": true,
Jan 22 04:32:27 np0005591760 happy_roentgen[74967]:    "active_name": "compute-0.rfmoog",
Jan 22 04:32:27 np0005591760 happy_roentgen[74967]:    "num_standby": 0
Jan 22 04:32:27 np0005591760 happy_roentgen[74967]: }
Jan 22 04:32:27 np0005591760 systemd[1]: libpod-c0c2bcdfc35f558e8d32d3c4f223ceac9a4ea5067fabec7e0f4256f8d38794de.scope: Deactivated successfully.
Jan 22 04:32:27 np0005591760 conmon[74967]: conmon c0c2bcdfc35f558e8d32 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0c2bcdfc35f558e8d32d3c4f223ceac9a4ea5067fabec7e0f4256f8d38794de.scope/container/memory.events
Jan 22 04:32:27 np0005591760 podman[74993]: 2026-01-22 09:32:27.938896014 +0000 UTC m=+0.016204841 container died c0c2bcdfc35f558e8d32d3c4f223ceac9a4ea5067fabec7e0f4256f8d38794de (image=quay.io/ceph/ceph:v19, name=happy_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 22 04:32:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-2834917486c2d16a0737c557232b5e2be3c2b61a57d006a64d1547d25468428f-merged.mount: Deactivated successfully.
Jan 22 04:32:27 np0005591760 podman[74993]: 2026-01-22 09:32:27.955391341 +0000 UTC m=+0.032700149 container remove c0c2bcdfc35f558e8d32d3c4f223ceac9a4ea5067fabec7e0f4256f8d38794de (image=quay.io/ceph/ceph:v19, name=happy_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:27 np0005591760 systemd[1]: libpod-conmon-c0c2bcdfc35f558e8d32d3c4f223ceac9a4ea5067fabec7e0f4256f8d38794de.scope: Deactivated successfully.
Jan 22 04:32:27 np0005591760 podman[75004]: 2026-01-22 09:32:27.999679215 +0000 UTC m=+0.027025039 container create 423e8282c32e3bfb73f5ebde52493092a5136bc135529515167da072ee45d5b1 (image=quay.io/ceph/ceph:v19, name=suspicious_wing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 22 04:32:28 np0005591760 systemd[1]: Started libpod-conmon-423e8282c32e3bfb73f5ebde52493092a5136bc135529515167da072ee45d5b1.scope.
Jan 22 04:32:28 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c3913df046e319ea3b8dbd0fe3b4b3248007ace7d519dc78236e3e9c27b4c3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c3913df046e319ea3b8dbd0fe3b4b3248007ace7d519dc78236e3e9c27b4c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c3913df046e319ea3b8dbd0fe3b4b3248007ace7d519dc78236e3e9c27b4c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:28 np0005591760 podman[75004]: 2026-01-22 09:32:28.048403632 +0000 UTC m=+0.075749466 container init 423e8282c32e3bfb73f5ebde52493092a5136bc135529515167da072ee45d5b1 (image=quay.io/ceph/ceph:v19, name=suspicious_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:32:28 np0005591760 podman[75004]: 2026-01-22 09:32:28.052886384 +0000 UTC m=+0.080232208 container start 423e8282c32e3bfb73f5ebde52493092a5136bc135529515167da072ee45d5b1 (image=quay.io/ceph/ceph:v19, name=suspicious_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:28 np0005591760 podman[75004]: 2026-01-22 09:32:28.053944338 +0000 UTC m=+0.081290162 container attach 423e8282c32e3bfb73f5ebde52493092a5136bc135529515167da072ee45d5b1 (image=quay.io/ceph/ceph:v19, name=suspicious_wing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 22 04:32:28 np0005591760 podman[75004]: 2026-01-22 09:32:27.988931082 +0000 UTC m=+0.016276926 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:28 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'crash'
Jan 22 04:32:28 np0005591760 ceph-mgr[74522]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 04:32:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:28.387+0000 7f7f0e167140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 04:32:28 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'dashboard'
Jan 22 04:32:28 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/2866186674' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 22 04:32:28 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'devicehealth'
Jan 22 04:32:28 np0005591760 ceph-mgr[74522]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 04:32:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:28.925+0000 7f7f0e167140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 04:32:28 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'diskprediction_local'
Jan 22 04:32:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 22 04:32:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 22 04:32:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  from numpy import show_config as show_numpy_config
Jan 22 04:32:29 np0005591760 ceph-mgr[74522]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 04:32:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:29.071+0000 7f7f0e167140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 04:32:29 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'influx'
Jan 22 04:32:29 np0005591760 ceph-mgr[74522]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 04:32:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:29.133+0000 7f7f0e167140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 04:32:29 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'insights'
Jan 22 04:32:29 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'iostat'
Jan 22 04:32:29 np0005591760 ceph-mgr[74522]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 04:32:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:29.251+0000 7f7f0e167140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 04:32:29 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'k8sevents'
Jan 22 04:32:29 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'localpool'
Jan 22 04:32:29 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'mds_autoscaler'
Jan 22 04:32:29 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'mirroring'
Jan 22 04:32:29 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'nfs'
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:30.096+0000 7f7f0e167140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'orchestrator'
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:30.282+0000 7f7f0e167140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'osd_perf_query'
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:30.348+0000 7f7f0e167140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'osd_support'
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:30.406+0000 7f7f0e167140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'pg_autoscaler'
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:30.474+0000 7f7f0e167140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'progress'
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:30.536+0000 7f7f0e167140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'prometheus'
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:30.832+0000 7f7f0e167140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rbd_support'
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:30.917+0000 7f7f0e167140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 04:32:30 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'restful'
Jan 22 04:32:31 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rgw'
Jan 22 04:32:31 np0005591760 ceph-mgr[74522]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 04:32:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:31.295+0000 7f7f0e167140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 04:32:31 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rook'
Jan 22 04:32:31 np0005591760 ceph-mgr[74522]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 04:32:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:31.774+0000 7f7f0e167140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 04:32:31 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'selftest'
Jan 22 04:32:31 np0005591760 ceph-mgr[74522]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 04:32:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:31.835+0000 7f7f0e167140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 04:32:31 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'snap_schedule'
Jan 22 04:32:31 np0005591760 ceph-mgr[74522]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 04:32:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:31.904+0000 7f7f0e167140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 04:32:31 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'stats'
Jan 22 04:32:31 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'status'
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 04:32:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:32.033+0000 7f7f0e167140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'telegraf'
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 04:32:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:32.093+0000 7f7f0e167140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'telemetry'
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 04:32:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:32.226+0000 7f7f0e167140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'test_orchestrator'
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 04:32:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:32.415+0000 7f7f0e167140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'volumes'
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 04:32:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:32.648+0000 7f7f0e167140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'zabbix'
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 04:32:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:32:32.709+0000 7f7f0e167140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Active manager daemon compute-0.rfmoog restarted
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rfmoog
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: ms_deliver_dispatch: unhandled message 0x55cb6f3bad00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map Activating!
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map I am now activating
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.rfmoog(active, starting, since 0.00574152s)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rfmoog", "id": "compute-0.rfmoog"} v 0)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr metadata", "who": "compute-0.rfmoog", "id": "compute-0.rfmoog"}]: dispatch
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e1 all = 1
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Manager daemon compute-0.rfmoog is now available
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: balancer
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [balancer INFO root] Starting
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:32:32
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [balancer INFO root] No pools available
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: cephadm
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: crash
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: devicehealth
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] Starting
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: iostat
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: nfs
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: orchestrator
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: pg_autoscaler
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: progress
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [progress INFO root] Loading...
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [progress INFO root] No stored events to load
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [progress INFO root] Loaded [] historic events
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [progress INFO root] Loaded OSDMap, ready.
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] recovery thread starting
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] starting setup
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: rbd_support
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: restful
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"} v 0)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"}]: dispatch
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [restful INFO root] server_addr: :: server_port: 8003
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: status
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: telemetry
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [restful WARNING root] server not running: no certificate configured
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] PerfHandler: starting
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TaskHandler: starting
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"} v 0)
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"}]: dispatch
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: Active manager daemon compute-0.rfmoog restarted
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: Activating manager daemon compute-0.rfmoog
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: Manager daemon compute-0.rfmoog is now available
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:32 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"}]: dispatch
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] setup complete
Jan 22 04:32:32 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: volumes
Jan 22 04:32:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019940688 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:32:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Jan 22 04:32:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Jan 22 04:32:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:33 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.rfmoog(active, since 1.00996s)
Jan 22 04:32:33 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 22 04:32:33 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 22 04:32:33 np0005591760 suspicious_wing[75024]: {
Jan 22 04:32:33 np0005591760 suspicious_wing[75024]:    "mgrmap_epoch": 7,
Jan 22 04:32:33 np0005591760 suspicious_wing[75024]:    "initialized": true
Jan 22 04:32:33 np0005591760 suspicious_wing[75024]: }
Jan 22 04:32:33 np0005591760 systemd[1]: libpod-423e8282c32e3bfb73f5ebde52493092a5136bc135529515167da072ee45d5b1.scope: Deactivated successfully.
Jan 22 04:32:33 np0005591760 podman[75004]: 2026-01-22 09:32:33.743693799 +0000 UTC m=+5.771039624 container died 423e8282c32e3bfb73f5ebde52493092a5136bc135529515167da072ee45d5b1 (image=quay.io/ceph/ceph:v19, name=suspicious_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:33 np0005591760 systemd[1]: var-lib-containers-storage-overlay-45c3913df046e319ea3b8dbd0fe3b4b3248007ace7d519dc78236e3e9c27b4c3-merged.mount: Deactivated successfully.
Jan 22 04:32:33 np0005591760 podman[75004]: 2026-01-22 09:32:33.764233662 +0000 UTC m=+5.791579486 container remove 423e8282c32e3bfb73f5ebde52493092a5136bc135529515167da072ee45d5b1 (image=quay.io/ceph/ceph:v19, name=suspicious_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:33 np0005591760 ceph-mon[74254]: Found migration_current of "None". Setting to last migration.
Jan 22 04:32:33 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"}]: dispatch
Jan 22 04:32:33 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:33 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:33 np0005591760 systemd[1]: libpod-conmon-423e8282c32e3bfb73f5ebde52493092a5136bc135529515167da072ee45d5b1.scope: Deactivated successfully.
Jan 22 04:32:33 np0005591760 podman[75177]: 2026-01-22 09:32:33.814128956 +0000 UTC m=+0.034553912 container create 010c025aab4f26a0d4a9246c64ea59e5b47415528c9d1fb9eb9f75ae049bff25 (image=quay.io/ceph/ceph:v19, name=nervous_goldwasser, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:33 np0005591760 systemd[1]: Started libpod-conmon-010c025aab4f26a0d4a9246c64ea59e5b47415528c9d1fb9eb9f75ae049bff25.scope.
Jan 22 04:32:33 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:33 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24dba806167da7f8d1fa58d0e023d0176bc60eabfc76e000b397e3aed0bd548c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:33 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24dba806167da7f8d1fa58d0e023d0176bc60eabfc76e000b397e3aed0bd548c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:33 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24dba806167da7f8d1fa58d0e023d0176bc60eabfc76e000b397e3aed0bd548c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:33 np0005591760 podman[75177]: 2026-01-22 09:32:33.882279736 +0000 UTC m=+0.102704703 container init 010c025aab4f26a0d4a9246c64ea59e5b47415528c9d1fb9eb9f75ae049bff25 (image=quay.io/ceph/ceph:v19, name=nervous_goldwasser, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:33 np0005591760 podman[75177]: 2026-01-22 09:32:33.887254485 +0000 UTC m=+0.107679432 container start 010c025aab4f26a0d4a9246c64ea59e5b47415528c9d1fb9eb9f75ae049bff25 (image=quay.io/ceph/ceph:v19, name=nervous_goldwasser, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:33 np0005591760 podman[75177]: 2026-01-22 09:32:33.888432848 +0000 UTC m=+0.108857803 container attach 010c025aab4f26a0d4a9246c64ea59e5b47415528c9d1fb9eb9f75ae049bff25 (image=quay.io/ceph/ceph:v19, name=nervous_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 04:32:33 np0005591760 podman[75177]: 2026-01-22 09:32:33.795736332 +0000 UTC m=+0.016161298 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:32:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 22 04:32:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 22 04:32:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 04:32:34 np0005591760 systemd[1]: libpod-010c025aab4f26a0d4a9246c64ea59e5b47415528c9d1fb9eb9f75ae049bff25.scope: Deactivated successfully.
Jan 22 04:32:34 np0005591760 podman[75177]: 2026-01-22 09:32:34.180247778 +0000 UTC m=+0.400672734 container died 010c025aab4f26a0d4a9246c64ea59e5b47415528c9d1fb9eb9f75ae049bff25 (image=quay.io/ceph/ceph:v19, name=nervous_goldwasser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:32:34] ENGINE Bus STARTING
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:32:34] ENGINE Bus STARTING
Jan 22 04:32:34 np0005591760 systemd[1]: var-lib-containers-storage-overlay-24dba806167da7f8d1fa58d0e023d0176bc60eabfc76e000b397e3aed0bd548c-merged.mount: Deactivated successfully.
Jan 22 04:32:34 np0005591760 podman[75177]: 2026-01-22 09:32:34.20652151 +0000 UTC m=+0.426946466 container remove 010c025aab4f26a0d4a9246c64ea59e5b47415528c9d1fb9eb9f75ae049bff25 (image=quay.io/ceph/ceph:v19, name=nervous_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:34 np0005591760 systemd[1]: libpod-conmon-010c025aab4f26a0d4a9246c64ea59e5b47415528c9d1fb9eb9f75ae049bff25.scope: Deactivated successfully.
Jan 22 04:32:34 np0005591760 podman[75238]: 2026-01-22 09:32:34.248414238 +0000 UTC m=+0.028369626 container create afa757cb6a6e5e2ff7b79af457e72f6257cdfcc077c2dbd1f0c0c4898598701e (image=quay.io/ceph/ceph:v19, name=jovial_knuth, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:34 np0005591760 systemd[1]: Started libpod-conmon-afa757cb6a6e5e2ff7b79af457e72f6257cdfcc077c2dbd1f0c0c4898598701e.scope.
Jan 22 04:32:34 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:32:34] ENGINE Serving on http://192.168.122.100:8765
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:32:34] ENGINE Serving on http://192.168.122.100:8765
Jan 22 04:32:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dde90cffc428b2b4461e2fdccc3e4da377e03c8796d75e6fa364f23a2f11e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dde90cffc428b2b4461e2fdccc3e4da377e03c8796d75e6fa364f23a2f11e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dde90cffc428b2b4461e2fdccc3e4da377e03c8796d75e6fa364f23a2f11e1c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:34 np0005591760 podman[75238]: 2026-01-22 09:32:34.307234938 +0000 UTC m=+0.087190336 container init afa757cb6a6e5e2ff7b79af457e72f6257cdfcc077c2dbd1f0c0c4898598701e (image=quay.io/ceph/ceph:v19, name=jovial_knuth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:34 np0005591760 podman[75238]: 2026-01-22 09:32:34.311836514 +0000 UTC m=+0.091791902 container start afa757cb6a6e5e2ff7b79af457e72f6257cdfcc077c2dbd1f0c0c4898598701e (image=quay.io/ceph/ceph:v19, name=jovial_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:34 np0005591760 podman[75238]: 2026-01-22 09:32:34.312898525 +0000 UTC m=+0.092853933 container attach afa757cb6a6e5e2ff7b79af457e72f6257cdfcc077c2dbd1f0c0c4898598701e (image=quay.io/ceph/ceph:v19, name=jovial_knuth, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:32:34 np0005591760 podman[75238]: 2026-01-22 09:32:34.237549135 +0000 UTC m=+0.017504522 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:32:34] ENGINE Serving on https://192.168.122.100:7150
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:32:34] ENGINE Serving on https://192.168.122.100:7150
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:32:34] ENGINE Bus STARTED
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:32:34] ENGINE Bus STARTED
Jan 22 04:32:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 22 04:32:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:32:34] ENGINE Client ('192.168.122.100', 59282) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:32:34] ENGINE Client ('192.168.122.100', 59282) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:32:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 22 04:32:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Set ssh ssh_user
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 22 04:32:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 22 04:32:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Set ssh ssh_config
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 22 04:32:34 np0005591760 jovial_knuth[75252]: ssh user set to ceph-admin. sudo will be used
Jan 22 04:32:34 np0005591760 systemd[1]: libpod-afa757cb6a6e5e2ff7b79af457e72f6257cdfcc077c2dbd1f0c0c4898598701e.scope: Deactivated successfully.
Jan 22 04:32:34 np0005591760 podman[75290]: 2026-01-22 09:32:34.615715188 +0000 UTC m=+0.017158580 container died afa757cb6a6e5e2ff7b79af457e72f6257cdfcc077c2dbd1f0c0c4898598701e (image=quay.io/ceph/ceph:v19, name=jovial_knuth, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 04:32:34 np0005591760 systemd[1]: var-lib-containers-storage-overlay-2dde90cffc428b2b4461e2fdccc3e4da377e03c8796d75e6fa364f23a2f11e1c-merged.mount: Deactivated successfully.
Jan 22 04:32:34 np0005591760 podman[75290]: 2026-01-22 09:32:34.632245712 +0000 UTC m=+0.033689093 container remove afa757cb6a6e5e2ff7b79af457e72f6257cdfcc077c2dbd1f0c0c4898598701e (image=quay.io/ceph/ceph:v19, name=jovial_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Jan 22 04:32:34 np0005591760 systemd[1]: libpod-conmon-afa757cb6a6e5e2ff7b79af457e72f6257cdfcc077c2dbd1f0c0c4898598701e.scope: Deactivated successfully.
Jan 22 04:32:34 np0005591760 podman[75301]: 2026-01-22 09:32:34.676646107 +0000 UTC m=+0.027463717 container create a3a285b0209d06e112f17b2712ace9e6ebe5472b9659ff52853f6ef84fae8151 (image=quay.io/ceph/ceph:v19, name=friendly_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:34 np0005591760 systemd[1]: Started libpod-conmon-a3a285b0209d06e112f17b2712ace9e6ebe5472b9659ff52853f6ef84fae8151.scope.
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 04:32:34 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a243305e05a80e208188fc05c50226503e6fadef60b717359431ea7320dd377/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a243305e05a80e208188fc05c50226503e6fadef60b717359431ea7320dd377/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a243305e05a80e208188fc05c50226503e6fadef60b717359431ea7320dd377/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a243305e05a80e208188fc05c50226503e6fadef60b717359431ea7320dd377/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a243305e05a80e208188fc05c50226503e6fadef60b717359431ea7320dd377/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:34 np0005591760 podman[75301]: 2026-01-22 09:32:34.739247957 +0000 UTC m=+0.090065566 container init a3a285b0209d06e112f17b2712ace9e6ebe5472b9659ff52853f6ef84fae8151 (image=quay.io/ceph/ceph:v19, name=friendly_hamilton, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:34 np0005591760 podman[75301]: 2026-01-22 09:32:34.742924066 +0000 UTC m=+0.093741676 container start a3a285b0209d06e112f17b2712ace9e6ebe5472b9659ff52853f6ef84fae8151 (image=quay.io/ceph/ceph:v19, name=friendly_hamilton, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 22 04:32:34 np0005591760 podman[75301]: 2026-01-22 09:32:34.744304198 +0000 UTC m=+0.095121808 container attach a3a285b0209d06e112f17b2712ace9e6ebe5472b9659ff52853f6ef84fae8151 (image=quay.io/ceph/ceph:v19, name=friendly_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:34 np0005591760 podman[75301]: 2026-01-22 09:32:34.66540781 +0000 UTC m=+0.016225439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:34 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:32:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:35 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 22 04:32:35 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 22 04:32:35 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Set ssh private key
Jan 22 04:32:35 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 22 04:32:35 np0005591760 systemd[1]: libpod-a3a285b0209d06e112f17b2712ace9e6ebe5472b9659ff52853f6ef84fae8151.scope: Deactivated successfully.
Jan 22 04:32:35 np0005591760 podman[75301]: 2026-01-22 09:32:35.014586054 +0000 UTC m=+0.365403674 container died a3a285b0209d06e112f17b2712ace9e6ebe5472b9659ff52853f6ef84fae8151 (image=quay.io/ceph/ceph:v19, name=friendly_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 04:32:35 np0005591760 systemd[1]: var-lib-containers-storage-overlay-9a243305e05a80e208188fc05c50226503e6fadef60b717359431ea7320dd377-merged.mount: Deactivated successfully.
Jan 22 04:32:35 np0005591760 podman[75301]: 2026-01-22 09:32:35.032247169 +0000 UTC m=+0.383064769 container remove a3a285b0209d06e112f17b2712ace9e6ebe5472b9659ff52853f6ef84fae8151 (image=quay.io/ceph/ceph:v19, name=friendly_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:32:35 np0005591760 systemd[1]: libpod-conmon-a3a285b0209d06e112f17b2712ace9e6ebe5472b9659ff52853f6ef84fae8151.scope: Deactivated successfully.
Jan 22 04:32:35 np0005591760 podman[75349]: 2026-01-22 09:32:35.070850797 +0000 UTC m=+0.025808897 container create 55ebede61afaeeee04771b76e63e8f3470f690b58dce5aa381438599aad6271f (image=quay.io/ceph/ceph:v19, name=romantic_mahavira, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:35 np0005591760 systemd[1]: Started libpod-conmon-55ebede61afaeeee04771b76e63e8f3470f690b58dce5aa381438599aad6271f.scope.
Jan 22 04:32:35 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db87b7891b822340503bf1a68645fc8b2694e010fcf67b77345c4dce086dbd6/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db87b7891b822340503bf1a68645fc8b2694e010fcf67b77345c4dce086dbd6/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db87b7891b822340503bf1a68645fc8b2694e010fcf67b77345c4dce086dbd6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db87b7891b822340503bf1a68645fc8b2694e010fcf67b77345c4dce086dbd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db87b7891b822340503bf1a68645fc8b2694e010fcf67b77345c4dce086dbd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:35 np0005591760 podman[75349]: 2026-01-22 09:32:35.112036651 +0000 UTC m=+0.066994771 container init 55ebede61afaeeee04771b76e63e8f3470f690b58dce5aa381438599aad6271f (image=quay.io/ceph/ceph:v19, name=romantic_mahavira, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:35 np0005591760 podman[75349]: 2026-01-22 09:32:35.118165726 +0000 UTC m=+0.073123827 container start 55ebede61afaeeee04771b76e63e8f3470f690b58dce5aa381438599aad6271f (image=quay.io/ceph/ceph:v19, name=romantic_mahavira, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:35 np0005591760 podman[75349]: 2026-01-22 09:32:35.11942515 +0000 UTC m=+0.074383250 container attach 55ebede61afaeeee04771b76e63e8f3470f690b58dce5aa381438599aad6271f (image=quay.io/ceph/ceph:v19, name=romantic_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:35 np0005591760 podman[75349]: 2026-01-22 09:32:35.060096232 +0000 UTC m=+0.015054352 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:32:34] ENGINE Bus STARTING
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:32:34] ENGINE Serving on http://192.168.122.100:8765
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:32:34] ENGINE Serving on https://192.168.122.100:7150
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:32:34] ENGINE Bus STARTED
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:32:34] ENGINE Client ('192.168.122.100', 59282) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:35 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:35 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 22 04:32:35 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 22 04:32:35 np0005591760 systemd[1]: libpod-55ebede61afaeeee04771b76e63e8f3470f690b58dce5aa381438599aad6271f.scope: Deactivated successfully.
Jan 22 04:32:35 np0005591760 podman[75349]: 2026-01-22 09:32:35.392942286 +0000 UTC m=+0.347900385 container died 55ebede61afaeeee04771b76e63e8f3470f690b58dce5aa381438599aad6271f (image=quay.io/ceph/ceph:v19, name=romantic_mahavira, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 22 04:32:35 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0db87b7891b822340503bf1a68645fc8b2694e010fcf67b77345c4dce086dbd6-merged.mount: Deactivated successfully.
Jan 22 04:32:35 np0005591760 podman[75349]: 2026-01-22 09:32:35.412851679 +0000 UTC m=+0.367809780 container remove 55ebede61afaeeee04771b76e63e8f3470f690b58dce5aa381438599aad6271f (image=quay.io/ceph/ceph:v19, name=romantic_mahavira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:35 np0005591760 systemd[1]: libpod-conmon-55ebede61afaeeee04771b76e63e8f3470f690b58dce5aa381438599aad6271f.scope: Deactivated successfully.
Jan 22 04:32:35 np0005591760 podman[75398]: 2026-01-22 09:32:35.456136703 +0000 UTC m=+0.029887026 container create 8202e608baf94ed686eac18d4fd9aebecfddcf2de1ce6da43fca3e6d4ed04652 (image=quay.io/ceph/ceph:v19, name=mystifying_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 22 04:32:35 np0005591760 systemd[1]: Started libpod-conmon-8202e608baf94ed686eac18d4fd9aebecfddcf2de1ce6da43fca3e6d4ed04652.scope.
Jan 22 04:32:35 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2babb7d938f1730615a1b5ed535d33b99cac79b67d911ad881a4c11f59b00f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2babb7d938f1730615a1b5ed535d33b99cac79b67d911ad881a4c11f59b00f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2babb7d938f1730615a1b5ed535d33b99cac79b67d911ad881a4c11f59b00f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:35 np0005591760 podman[75398]: 2026-01-22 09:32:35.512465194 +0000 UTC m=+0.086215518 container init 8202e608baf94ed686eac18d4fd9aebecfddcf2de1ce6da43fca3e6d4ed04652 (image=quay.io/ceph/ceph:v19, name=mystifying_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:35 np0005591760 podman[75398]: 2026-01-22 09:32:35.516180318 +0000 UTC m=+0.089930643 container start 8202e608baf94ed686eac18d4fd9aebecfddcf2de1ce6da43fca3e6d4ed04652 (image=quay.io/ceph/ceph:v19, name=mystifying_rosalind, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:35 np0005591760 podman[75398]: 2026-01-22 09:32:35.52056819 +0000 UTC m=+0.094318534 container attach 8202e608baf94ed686eac18d4fd9aebecfddcf2de1ce6da43fca3e6d4ed04652 (image=quay.io/ceph/ceph:v19, name=mystifying_rosalind, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:35 np0005591760 podman[75398]: 2026-01-22 09:32:35.444581448 +0000 UTC m=+0.018331791 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.rfmoog(active, since 2s)
Jan 22 04:32:35 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:32:35 np0005591760 mystifying_rosalind[75412]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8zNa3HKlwcoel+mI5KEJvwgUIfbtQYWF8DFbGiqthpkFRXKjwhZUABfNOMn7vki6eAJxz1NPnXa7asd5z0+79i+ds0YgAIyl7CVBQ8Gqvcpv+BhFhmIXk9K/omO6mjKU6TQ3ckkuJ8QUdoGbsT4P3WRd/JwjsF0N78PjnB9UOtw4E2FnA3C9jMCkoZlf/G8KTtO3AjFkf5aFWbjxPmDfJjoSY/tSYSa2w6t+x49Xr2I8mdy08NvSK00ISPUDMfcD+FapONAVQocYvxEI2Qsrlu8WaSe+KgBgLUPWDPYg2zyqpXbBXCZrL1AUvn9hCuw+xi60T6boJE4+v39fXNwJdEkvonuiJjvKzmPI1Dyw6TOB4WJYXKR57+A0ZwyWDfRWEh/xK3ekofpOL56lAN/QfeWCKbYi/i9dSk+9nC7MlVxsFOBerNWpHOZSWjCvD93K7pIjSQLqUzFCuGnhmmhOghzmgSTnEjYxju35FS3LL8zG2Pzx+mH1tLD8BHGnfHKU= zuul@controller
Jan 22 04:32:35 np0005591760 systemd[1]: libpod-8202e608baf94ed686eac18d4fd9aebecfddcf2de1ce6da43fca3e6d4ed04652.scope: Deactivated successfully.
Jan 22 04:32:35 np0005591760 conmon[75412]: conmon 8202e608baf94ed686ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8202e608baf94ed686eac18d4fd9aebecfddcf2de1ce6da43fca3e6d4ed04652.scope/container/memory.events
Jan 22 04:32:35 np0005591760 podman[75398]: 2026-01-22 09:32:35.78751064 +0000 UTC m=+0.361260965 container died 8202e608baf94ed686eac18d4fd9aebecfddcf2de1ce6da43fca3e6d4ed04652 (image=quay.io/ceph/ceph:v19, name=mystifying_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:35 np0005591760 systemd[1]: var-lib-containers-storage-overlay-5a2babb7d938f1730615a1b5ed535d33b99cac79b67d911ad881a4c11f59b00f-merged.mount: Deactivated successfully.
Jan 22 04:32:35 np0005591760 podman[75398]: 2026-01-22 09:32:35.806211205 +0000 UTC m=+0.379961530 container remove 8202e608baf94ed686eac18d4fd9aebecfddcf2de1ce6da43fca3e6d4ed04652 (image=quay.io/ceph/ceph:v19, name=mystifying_rosalind, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 22 04:32:35 np0005591760 systemd[1]: libpod-conmon-8202e608baf94ed686eac18d4fd9aebecfddcf2de1ce6da43fca3e6d4ed04652.scope: Deactivated successfully.
Jan 22 04:32:35 np0005591760 podman[75447]: 2026-01-22 09:32:35.848346891 +0000 UTC m=+0.027897073 container create e40eec9b32452a3e699e1185df699b3ec81fdc42ba290ca458f057f5ebb06824 (image=quay.io/ceph/ceph:v19, name=hungry_hypatia, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:35 np0005591760 systemd[1]: Started libpod-conmon-e40eec9b32452a3e699e1185df699b3ec81fdc42ba290ca458f057f5ebb06824.scope.
Jan 22 04:32:35 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af656f7b2d1c55c6a3422840e4af3fbe6753ff5001997b8a853d1e569f4c7faa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af656f7b2d1c55c6a3422840e4af3fbe6753ff5001997b8a853d1e569f4c7faa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af656f7b2d1c55c6a3422840e4af3fbe6753ff5001997b8a853d1e569f4c7faa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:35 np0005591760 podman[75447]: 2026-01-22 09:32:35.900686604 +0000 UTC m=+0.080236796 container init e40eec9b32452a3e699e1185df699b3ec81fdc42ba290ca458f057f5ebb06824 (image=quay.io/ceph/ceph:v19, name=hungry_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:35 np0005591760 podman[75447]: 2026-01-22 09:32:35.904755044 +0000 UTC m=+0.084305236 container start e40eec9b32452a3e699e1185df699b3ec81fdc42ba290ca458f057f5ebb06824 (image=quay.io/ceph/ceph:v19, name=hungry_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 04:32:35 np0005591760 podman[75447]: 2026-01-22 09:32:35.907186789 +0000 UTC m=+0.086736971 container attach e40eec9b32452a3e699e1185df699b3ec81fdc42ba290ca458f057f5ebb06824 (image=quay.io/ceph/ceph:v19, name=hungry_hypatia, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:35 np0005591760 podman[75447]: 2026-01-22 09:32:35.837475386 +0000 UTC m=+0.017025588 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:36 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:32:36 np0005591760 systemd[1]: Created slice User Slice of UID 42477.
Jan 22 04:32:36 np0005591760 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 22 04:32:36 np0005591760 systemd-logind[747]: New session 21 of user ceph-admin.
Jan 22 04:32:36 np0005591760 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 22 04:32:36 np0005591760 ceph-mon[74254]: Set ssh ssh_user
Jan 22 04:32:36 np0005591760 ceph-mon[74254]: Set ssh ssh_config
Jan 22 04:32:36 np0005591760 ceph-mon[74254]: ssh user set to ceph-admin. sudo will be used
Jan 22 04:32:36 np0005591760 ceph-mon[74254]: Set ssh ssh_identity_key
Jan 22 04:32:36 np0005591760 ceph-mon[74254]: Set ssh private key
Jan 22 04:32:36 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:36 np0005591760 ceph-mon[74254]: Set ssh ssh_identity_pub
Jan 22 04:32:36 np0005591760 systemd[1]: Starting User Manager for UID 42477...
Jan 22 04:32:36 np0005591760 systemd[75491]: Queued start job for default target Main User Target.
Jan 22 04:32:36 np0005591760 systemd[75491]: Created slice User Application Slice.
Jan 22 04:32:36 np0005591760 systemd[75491]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 04:32:36 np0005591760 systemd[75491]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 04:32:36 np0005591760 systemd[75491]: Reached target Paths.
Jan 22 04:32:36 np0005591760 systemd[75491]: Reached target Timers.
Jan 22 04:32:36 np0005591760 systemd[75491]: Starting D-Bus User Message Bus Socket...
Jan 22 04:32:36 np0005591760 systemd[75491]: Starting Create User's Volatile Files and Directories...
Jan 22 04:32:36 np0005591760 systemd[75491]: Finished Create User's Volatile Files and Directories.
Jan 22 04:32:36 np0005591760 systemd[75491]: Listening on D-Bus User Message Bus Socket.
Jan 22 04:32:36 np0005591760 systemd[75491]: Reached target Sockets.
Jan 22 04:32:36 np0005591760 systemd[75491]: Reached target Basic System.
Jan 22 04:32:36 np0005591760 systemd[75491]: Reached target Main User Target.
Jan 22 04:32:36 np0005591760 systemd[75491]: Startup finished in 88ms.
Jan 22 04:32:36 np0005591760 systemd[1]: Started User Manager for UID 42477.
Jan 22 04:32:36 np0005591760 systemd[1]: Started Session 21 of User ceph-admin.
Jan 22 04:32:36 np0005591760 systemd-logind[747]: New session 23 of user ceph-admin.
Jan 22 04:32:36 np0005591760 systemd[1]: Started Session 23 of User ceph-admin.
Jan 22 04:32:36 np0005591760 ceph-mgr[74522]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 04:32:36 np0005591760 systemd-logind[747]: New session 24 of user ceph-admin.
Jan 22 04:32:36 np0005591760 systemd[1]: Started Session 24 of User ceph-admin.
Jan 22 04:32:37 np0005591760 systemd-logind[747]: New session 25 of user ceph-admin.
Jan 22 04:32:37 np0005591760 systemd[1]: Started Session 25 of User ceph-admin.
Jan 22 04:32:37 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 22 04:32:37 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 22 04:32:37 np0005591760 systemd-logind[747]: New session 26 of user ceph-admin.
Jan 22 04:32:37 np0005591760 systemd[1]: Started Session 26 of User ceph-admin.
Jan 22 04:32:37 np0005591760 systemd-logind[747]: New session 27 of user ceph-admin.
Jan 22 04:32:37 np0005591760 systemd[1]: Started Session 27 of User ceph-admin.
Jan 22 04:32:37 np0005591760 systemd-logind[747]: New session 28 of user ceph-admin.
Jan 22 04:32:37 np0005591760 systemd[1]: Started Session 28 of User ceph-admin.
Jan 22 04:32:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053268 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:32:38 np0005591760 systemd-logind[747]: New session 29 of user ceph-admin.
Jan 22 04:32:38 np0005591760 systemd[1]: Started Session 29 of User ceph-admin.
Jan 22 04:32:38 np0005591760 ceph-mon[74254]: Deploying cephadm binary to compute-0
Jan 22 04:32:38 np0005591760 systemd-logind[747]: New session 30 of user ceph-admin.
Jan 22 04:32:38 np0005591760 systemd[1]: Started Session 30 of User ceph-admin.
Jan 22 04:32:38 np0005591760 ceph-mgr[74522]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 04:32:38 np0005591760 systemd-logind[747]: New session 31 of user ceph-admin.
Jan 22 04:32:38 np0005591760 systemd[1]: Started Session 31 of User ceph-admin.
Jan 22 04:32:39 np0005591760 systemd-logind[747]: New session 32 of user ceph-admin.
Jan 22 04:32:39 np0005591760 systemd[1]: Started Session 32 of User ceph-admin.
Jan 22 04:32:39 np0005591760 systemd-logind[747]: New session 33 of user ceph-admin.
Jan 22 04:32:39 np0005591760 systemd[1]: Started Session 33 of User ceph-admin.
Jan 22 04:32:40 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 22 04:32:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:40 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Added host compute-0
Jan 22 04:32:40 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 22 04:32:40 np0005591760 hungry_hypatia[75461]: Added host 'compute-0' with addr '192.168.122.100'
Jan 22 04:32:40 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 22 04:32:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 04:32:40 np0005591760 podman[75447]: 2026-01-22 09:32:40.155190386 +0000 UTC m=+4.334740569 container died e40eec9b32452a3e699e1185df699b3ec81fdc42ba290ca458f057f5ebb06824 (image=quay.io/ceph/ceph:v19, name=hungry_hypatia, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:40 np0005591760 systemd[1]: libpod-e40eec9b32452a3e699e1185df699b3ec81fdc42ba290ca458f057f5ebb06824.scope: Deactivated successfully.
Jan 22 04:32:40 np0005591760 systemd[1]: var-lib-containers-storage-overlay-af656f7b2d1c55c6a3422840e4af3fbe6753ff5001997b8a853d1e569f4c7faa-merged.mount: Deactivated successfully.
Jan 22 04:32:40 np0005591760 podman[75447]: 2026-01-22 09:32:40.180022511 +0000 UTC m=+4.359572692 container remove e40eec9b32452a3e699e1185df699b3ec81fdc42ba290ca458f057f5ebb06824 (image=quay.io/ceph/ceph:v19, name=hungry_hypatia, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:40 np0005591760 systemd[1]: libpod-conmon-e40eec9b32452a3e699e1185df699b3ec81fdc42ba290ca458f057f5ebb06824.scope: Deactivated successfully.
Jan 22 04:32:40 np0005591760 podman[75877]: 2026-01-22 09:32:40.228892342 +0000 UTC m=+0.030340439 container create 66a520cad2cf083bb1d50240b12d778a0418259695eda6a1f75281b5c590a682 (image=quay.io/ceph/ceph:v19, name=cranky_boyd, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 22 04:32:40 np0005591760 systemd[1]: Started libpod-conmon-66a520cad2cf083bb1d50240b12d778a0418259695eda6a1f75281b5c590a682.scope.
Jan 22 04:32:40 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecbfe079290d7cfbbe0e621aaa57741fa4e6858270ac66bc8782ed0500b08b7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecbfe079290d7cfbbe0e621aaa57741fa4e6858270ac66bc8782ed0500b08b7f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecbfe079290d7cfbbe0e621aaa57741fa4e6858270ac66bc8782ed0500b08b7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:40 np0005591760 podman[75877]: 2026-01-22 09:32:40.28693188 +0000 UTC m=+0.088379978 container init 66a520cad2cf083bb1d50240b12d778a0418259695eda6a1f75281b5c590a682 (image=quay.io/ceph/ceph:v19, name=cranky_boyd, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:40 np0005591760 podman[75877]: 2026-01-22 09:32:40.290909319 +0000 UTC m=+0.092357416 container start 66a520cad2cf083bb1d50240b12d778a0418259695eda6a1f75281b5c590a682 (image=quay.io/ceph/ceph:v19, name=cranky_boyd, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:40 np0005591760 podman[75877]: 2026-01-22 09:32:40.292137905 +0000 UTC m=+0.093586002 container attach 66a520cad2cf083bb1d50240b12d778a0418259695eda6a1f75281b5c590a682 (image=quay.io/ceph/ceph:v19, name=cranky_boyd, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 22 04:32:40 np0005591760 podman[75877]: 2026-01-22 09:32:40.215091715 +0000 UTC m=+0.016539832 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:32:40 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 22 04:32:40 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 22 04:32:40 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 22 04:32:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:40 np0005591760 cranky_boyd[75914]: Scheduled mon update...
Jan 22 04:32:40 np0005591760 systemd[1]: libpod-66a520cad2cf083bb1d50240b12d778a0418259695eda6a1f75281b5c590a682.scope: Deactivated successfully.
Jan 22 04:32:40 np0005591760 podman[75877]: 2026-01-22 09:32:40.572997202 +0000 UTC m=+0.374445298 container died 66a520cad2cf083bb1d50240b12d778a0418259695eda6a1f75281b5c590a682 (image=quay.io/ceph/ceph:v19, name=cranky_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 22 04:32:40 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ecbfe079290d7cfbbe0e621aaa57741fa4e6858270ac66bc8782ed0500b08b7f-merged.mount: Deactivated successfully.
Jan 22 04:32:40 np0005591760 podman[75877]: 2026-01-22 09:32:40.596279444 +0000 UTC m=+0.397727541 container remove 66a520cad2cf083bb1d50240b12d778a0418259695eda6a1f75281b5c590a682 (image=quay.io/ceph/ceph:v19, name=cranky_boyd, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:40 np0005591760 systemd[1]: libpod-conmon-66a520cad2cf083bb1d50240b12d778a0418259695eda6a1f75281b5c590a682.scope: Deactivated successfully.
Jan 22 04:32:40 np0005591760 podman[75971]: 2026-01-22 09:32:40.636201116 +0000 UTC m=+0.025392993 container create 206d659d692af7601b3b66cef654729e5218130ab010b2646c90baa2514855ab (image=quay.io/ceph/ceph:v19, name=dreamy_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 22 04:32:40 np0005591760 systemd[1]: Started libpod-conmon-206d659d692af7601b3b66cef654729e5218130ab010b2646c90baa2514855ab.scope.
Jan 22 04:32:40 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09377c2ebc6f9d7ef0297c7628d2b6860b837d834ce4f6e40e12c7207a961adf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09377c2ebc6f9d7ef0297c7628d2b6860b837d834ce4f6e40e12c7207a961adf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09377c2ebc6f9d7ef0297c7628d2b6860b837d834ce4f6e40e12c7207a961adf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:40 np0005591760 podman[75971]: 2026-01-22 09:32:40.684631809 +0000 UTC m=+0.073823696 container init 206d659d692af7601b3b66cef654729e5218130ab010b2646c90baa2514855ab (image=quay.io/ceph/ceph:v19, name=dreamy_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:40 np0005591760 podman[75971]: 2026-01-22 09:32:40.68828115 +0000 UTC m=+0.077473027 container start 206d659d692af7601b3b66cef654729e5218130ab010b2646c90baa2514855ab (image=quay.io/ceph/ceph:v19, name=dreamy_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 04:32:40 np0005591760 podman[75971]: 2026-01-22 09:32:40.693173592 +0000 UTC m=+0.082365490 container attach 206d659d692af7601b3b66cef654729e5218130ab010b2646c90baa2514855ab (image=quay.io/ceph/ceph:v19, name=dreamy_heyrovsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:40 np0005591760 ceph-mgr[74522]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 04:32:40 np0005591760 podman[75971]: 2026-01-22 09:32:40.626532669 +0000 UTC m=+0.015724566 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:32:40 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 22 04:32:40 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 22 04:32:40 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 22 04:32:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:40 np0005591760 dreamy_heyrovsky[75985]: Scheduled mgr update...
Jan 22 04:32:40 np0005591760 systemd[1]: libpod-206d659d692af7601b3b66cef654729e5218130ab010b2646c90baa2514855ab.scope: Deactivated successfully.
Jan 22 04:32:40 np0005591760 podman[76011]: 2026-01-22 09:32:40.999197541 +0000 UTC m=+0.018471044 container died 206d659d692af7601b3b66cef654729e5218130ab010b2646c90baa2514855ab (image=quay.io/ceph/ceph:v19, name=dreamy_heyrovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:41 np0005591760 systemd[1]: var-lib-containers-storage-overlay-09377c2ebc6f9d7ef0297c7628d2b6860b837d834ce4f6e40e12c7207a961adf-merged.mount: Deactivated successfully.
Jan 22 04:32:41 np0005591760 podman[76011]: 2026-01-22 09:32:41.015979909 +0000 UTC m=+0.035253412 container remove 206d659d692af7601b3b66cef654729e5218130ab010b2646c90baa2514855ab (image=quay.io/ceph/ceph:v19, name=dreamy_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:41 np0005591760 podman[75948]: 2026-01-22 09:32:41.022473281 +0000 UTC m=+0.578793996 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:41 np0005591760 systemd[1]: libpod-conmon-206d659d692af7601b3b66cef654729e5218130ab010b2646c90baa2514855ab.scope: Deactivated successfully.
Jan 22 04:32:41 np0005591760 podman[76024]: 2026-01-22 09:32:41.06454156 +0000 UTC m=+0.027235977 container create 74fd54eb52c570ae04b874010e77eedfb4a6ff39b96fbfd643da06fdbd716b0b (image=quay.io/ceph/ceph:v19, name=wizardly_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 04:32:41 np0005591760 systemd[1]: Started libpod-conmon-74fd54eb52c570ae04b874010e77eedfb4a6ff39b96fbfd643da06fdbd716b0b.scope.
Jan 22 04:32:41 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf25788f02e75cdca4c86853b7fa786481c89a6fc0e191fa5c65c1cf6262ee87/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf25788f02e75cdca4c86853b7fa786481c89a6fc0e191fa5c65c1cf6262ee87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf25788f02e75cdca4c86853b7fa786481c89a6fc0e191fa5c65c1cf6262ee87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:41 np0005591760 podman[76043]: 2026-01-22 09:32:41.126696456 +0000 UTC m=+0.059946294 container create a854930730d90dd56335b4401c67427919aa1978599ae476fce6cbdc2c3f1445 (image=quay.io/ceph/ceph:v19, name=magical_joliot, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 04:32:41 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:41 np0005591760 ceph-mon[74254]: Added host compute-0
Jan 22 04:32:41 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:41 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:41 np0005591760 podman[76024]: 2026-01-22 09:32:41.053090071 +0000 UTC m=+0.015784508 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:41 np0005591760 podman[76043]: 2026-01-22 09:32:41.083495843 +0000 UTC m=+0.016745702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:42 np0005591760 ceph-mon[74254]: Saving service mon spec with placement count:5
Jan 22 04:32:42 np0005591760 ceph-mon[74254]: Saving service mgr spec with placement count:2
Jan 22 04:32:42 np0005591760 podman[76024]: 2026-01-22 09:32:42.528431561 +0000 UTC m=+1.491125978 container init 74fd54eb52c570ae04b874010e77eedfb4a6ff39b96fbfd643da06fdbd716b0b (image=quay.io/ceph/ceph:v19, name=wizardly_elgamal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 22 04:32:42 np0005591760 podman[76024]: 2026-01-22 09:32:42.53331078 +0000 UTC m=+1.496005196 container start 74fd54eb52c570ae04b874010e77eedfb4a6ff39b96fbfd643da06fdbd716b0b (image=quay.io/ceph/ceph:v19, name=wizardly_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 04:32:42 np0005591760 podman[76024]: 2026-01-22 09:32:42.534483641 +0000 UTC m=+1.497178058 container attach 74fd54eb52c570ae04b874010e77eedfb4a6ff39b96fbfd643da06fdbd716b0b (image=quay.io/ceph/ceph:v19, name=wizardly_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:42 np0005591760 systemd[1]: Started libpod-conmon-a854930730d90dd56335b4401c67427919aa1978599ae476fce6cbdc2c3f1445.scope.
Jan 22 04:32:42 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:42 np0005591760 podman[76043]: 2026-01-22 09:32:42.609360311 +0000 UTC m=+1.542610149 container init a854930730d90dd56335b4401c67427919aa1978599ae476fce6cbdc2c3f1445 (image=quay.io/ceph/ceph:v19, name=magical_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:32:42 np0005591760 podman[76043]: 2026-01-22 09:32:42.613246827 +0000 UTC m=+1.546496665 container start a854930730d90dd56335b4401c67427919aa1978599ae476fce6cbdc2c3f1445 (image=quay.io/ceph/ceph:v19, name=magical_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:42 np0005591760 podman[76043]: 2026-01-22 09:32:42.61448403 +0000 UTC m=+1.547733869 container attach a854930730d90dd56335b4401c67427919aa1978599ae476fce6cbdc2c3f1445 (image=quay.io/ceph/ceph:v19, name=magical_joliot, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:32:42 np0005591760 magical_joliot[76066]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Jan 22 04:32:42 np0005591760 systemd[1]: libpod-a854930730d90dd56335b4401c67427919aa1978599ae476fce6cbdc2c3f1445.scope: Deactivated successfully.
Jan 22 04:32:42 np0005591760 podman[76043]: 2026-01-22 09:32:42.693215959 +0000 UTC m=+1.626465807 container died a854930730d90dd56335b4401c67427919aa1978599ae476fce6cbdc2c3f1445 (image=quay.io/ceph/ceph:v19, name=magical_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:42 np0005591760 systemd[1]: var-lib-containers-storage-overlay-27fa7bb5e2ed2a8d0a0037e7d5f30c031f05363fde3a4a34ecf1ce8721d5d890-merged.mount: Deactivated successfully.
Jan 22 04:32:42 np0005591760 podman[76043]: 2026-01-22 09:32:42.710926257 +0000 UTC m=+1.644176095 container remove a854930730d90dd56335b4401c67427919aa1978599ae476fce6cbdc2c3f1445 (image=quay.io/ceph/ceph:v19, name=magical_joliot, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:42 np0005591760 ceph-mgr[74522]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 04:32:42 np0005591760 systemd[1]: libpod-conmon-a854930730d90dd56335b4401c67427919aa1978599ae476fce6cbdc2c3f1445.scope: Deactivated successfully.
Jan 22 04:32:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 22 04:32:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:42 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:32:42 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service crash spec with placement *
Jan 22 04:32:42 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 22 04:32:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 22 04:32:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:42 np0005591760 wizardly_elgamal[76057]: Scheduled crash update...
Jan 22 04:32:42 np0005591760 systemd[1]: libpod-74fd54eb52c570ae04b874010e77eedfb4a6ff39b96fbfd643da06fdbd716b0b.scope: Deactivated successfully.
Jan 22 04:32:42 np0005591760 podman[76024]: 2026-01-22 09:32:42.831754556 +0000 UTC m=+1.794448974 container died 74fd54eb52c570ae04b874010e77eedfb4a6ff39b96fbfd643da06fdbd716b0b (image=quay.io/ceph/ceph:v19, name=wizardly_elgamal, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:42 np0005591760 systemd[1]: var-lib-containers-storage-overlay-cf25788f02e75cdca4c86853b7fa786481c89a6fc0e191fa5c65c1cf6262ee87-merged.mount: Deactivated successfully.
Jan 22 04:32:42 np0005591760 podman[76024]: 2026-01-22 09:32:42.85428922 +0000 UTC m=+1.816983637 container remove 74fd54eb52c570ae04b874010e77eedfb4a6ff39b96fbfd643da06fdbd716b0b (image=quay.io/ceph/ceph:v19, name=wizardly_elgamal, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:42 np0005591760 systemd[1]: libpod-conmon-74fd54eb52c570ae04b874010e77eedfb4a6ff39b96fbfd643da06fdbd716b0b.scope: Deactivated successfully.
Jan 22 04:32:42 np0005591760 podman[76161]: 2026-01-22 09:32:42.898832435 +0000 UTC m=+0.027790272 container create eb174ce493e9d17f50dcf5b03f3f3aecfe2d5682d01a1f19749f13276d293b54 (image=quay.io/ceph/ceph:v19, name=eager_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 04:32:42 np0005591760 systemd[1]: Started libpod-conmon-eb174ce493e9d17f50dcf5b03f3f3aecfe2d5682d01a1f19749f13276d293b54.scope.
Jan 22 04:32:42 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26536465a9b6d589d1d0146fd96f8dd678d1da19b137c6d2e8e8910cb11e57ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26536465a9b6d589d1d0146fd96f8dd678d1da19b137c6d2e8e8910cb11e57ae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26536465a9b6d589d1d0146fd96f8dd678d1da19b137c6d2e8e8910cb11e57ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:42 np0005591760 podman[76161]: 2026-01-22 09:32:42.952662237 +0000 UTC m=+0.081620074 container init eb174ce493e9d17f50dcf5b03f3f3aecfe2d5682d01a1f19749f13276d293b54 (image=quay.io/ceph/ceph:v19, name=eager_villani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:42 np0005591760 podman[76161]: 2026-01-22 09:32:42.957176998 +0000 UTC m=+0.086134826 container start eb174ce493e9d17f50dcf5b03f3f3aecfe2d5682d01a1f19749f13276d293b54 (image=quay.io/ceph/ceph:v19, name=eager_villani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:32:42 np0005591760 podman[76161]: 2026-01-22 09:32:42.958358405 +0000 UTC m=+0.087316242 container attach eb174ce493e9d17f50dcf5b03f3f3aecfe2d5682d01a1f19749f13276d293b54 (image=quay.io/ceph/ceph:v19, name=eager_villani, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:42 np0005591760 podman[76161]: 2026-01-22 09:32:42.887646306 +0000 UTC m=+0.016604153 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054713 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3155800998' entity='client.admin' 
Jan 22 04:32:43 np0005591760 systemd[1]: libpod-eb174ce493e9d17f50dcf5b03f3f3aecfe2d5682d01a1f19749f13276d293b54.scope: Deactivated successfully.
Jan 22 04:32:43 np0005591760 podman[76161]: 2026-01-22 09:32:43.246056695 +0000 UTC m=+0.375014522 container died eb174ce493e9d17f50dcf5b03f3f3aecfe2d5682d01a1f19749f13276d293b54 (image=quay.io/ceph/ceph:v19, name=eager_villani, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 04:32:43 np0005591760 systemd[1]: var-lib-containers-storage-overlay-26536465a9b6d589d1d0146fd96f8dd678d1da19b137c6d2e8e8910cb11e57ae-merged.mount: Deactivated successfully.
Jan 22 04:32:43 np0005591760 podman[76161]: 2026-01-22 09:32:43.266579125 +0000 UTC m=+0.395536951 container remove eb174ce493e9d17f50dcf5b03f3f3aecfe2d5682d01a1f19749f13276d293b54 (image=quay.io/ceph/ceph:v19, name=eager_villani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 22 04:32:43 np0005591760 systemd[1]: libpod-conmon-eb174ce493e9d17f50dcf5b03f3f3aecfe2d5682d01a1f19749f13276d293b54.scope: Deactivated successfully.
Jan 22 04:32:43 np0005591760 podman[76278]: 2026-01-22 09:32:43.314068363 +0000 UTC m=+0.027842832 container create c28a53bfc709b8df0f39b5cac9cc868fff5dd8ec22f45e8a7f66093f501c9915 (image=quay.io/ceph/ceph:v19, name=mystifying_snyder, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 04:32:43 np0005591760 systemd[1]: Started libpod-conmon-c28a53bfc709b8df0f39b5cac9cc868fff5dd8ec22f45e8a7f66093f501c9915.scope.
Jan 22 04:32:43 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:43 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b069ddef3393f88b0bd9823a4c1ac53bd1faf5aff8afbd6a2e99f591fa4acc3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:43 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b069ddef3393f88b0bd9823a4c1ac53bd1faf5aff8afbd6a2e99f591fa4acc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:43 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b069ddef3393f88b0bd9823a4c1ac53bd1faf5aff8afbd6a2e99f591fa4acc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:43 np0005591760 podman[76278]: 2026-01-22 09:32:43.370691652 +0000 UTC m=+0.084466131 container init c28a53bfc709b8df0f39b5cac9cc868fff5dd8ec22f45e8a7f66093f501c9915 (image=quay.io/ceph/ceph:v19, name=mystifying_snyder, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 22 04:32:43 np0005591760 podman[76278]: 2026-01-22 09:32:43.376678528 +0000 UTC m=+0.090452987 container start c28a53bfc709b8df0f39b5cac9cc868fff5dd8ec22f45e8a7f66093f501c9915 (image=quay.io/ceph/ceph:v19, name=mystifying_snyder, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:43 np0005591760 podman[76278]: 2026-01-22 09:32:43.379089072 +0000 UTC m=+0.092863531 container attach c28a53bfc709b8df0f39b5cac9cc868fff5dd8ec22f45e8a7f66093f501c9915 (image=quay.io/ceph/ceph:v19, name=mystifying_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 04:32:43 np0005591760 podman[76278]: 2026-01-22 09:32:43.302999886 +0000 UTC m=+0.016774355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:43 np0005591760 podman[76371]: 2026-01-22 09:32:43.55472087 +0000 UTC m=+0.036398250 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 04:32:43 np0005591760 podman[76371]: 2026-01-22 09:32:43.634940844 +0000 UTC m=+0.116618204 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:43 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:43 np0005591760 systemd[1]: libpod-c28a53bfc709b8df0f39b5cac9cc868fff5dd8ec22f45e8a7f66093f501c9915.scope: Deactivated successfully.
Jan 22 04:32:43 np0005591760 podman[76278]: 2026-01-22 09:32:43.670602435 +0000 UTC m=+0.384376894 container died c28a53bfc709b8df0f39b5cac9cc868fff5dd8ec22f45e8a7f66093f501c9915 (image=quay.io/ceph/ceph:v19, name=mystifying_snyder, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:43 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1b069ddef3393f88b0bd9823a4c1ac53bd1faf5aff8afbd6a2e99f591fa4acc3-merged.mount: Deactivated successfully.
Jan 22 04:32:43 np0005591760 podman[76278]: 2026-01-22 09:32:43.689625497 +0000 UTC m=+0.403399957 container remove c28a53bfc709b8df0f39b5cac9cc868fff5dd8ec22f45e8a7f66093f501c9915 (image=quay.io/ceph/ceph:v19, name=mystifying_snyder, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:43 np0005591760 systemd[1]: libpod-conmon-c28a53bfc709b8df0f39b5cac9cc868fff5dd8ec22f45e8a7f66093f501c9915.scope: Deactivated successfully.
Jan 22 04:32:43 np0005591760 podman[76421]: 2026-01-22 09:32:43.739436864 +0000 UTC m=+0.029958861 container create 458c52e058ee6e29ac29e22b00ced48c095e1a18f3ea567fb43e3cb0f1ba00a0 (image=quay.io/ceph/ceph:v19, name=bold_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: Saving service crash spec with placement *
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/3155800998' entity='client.admin' 
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:43 np0005591760 systemd[1]: Started libpod-conmon-458c52e058ee6e29ac29e22b00ced48c095e1a18f3ea567fb43e3cb0f1ba00a0.scope.
Jan 22 04:32:43 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:43 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ddfa974061f1263028076bfd8cc6233fd70c398d10b533147cb6d81e75b8cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:43 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ddfa974061f1263028076bfd8cc6233fd70c398d10b533147cb6d81e75b8cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:43 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ddfa974061f1263028076bfd8cc6233fd70c398d10b533147cb6d81e75b8cd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:43 np0005591760 podman[76421]: 2026-01-22 09:32:43.795824859 +0000 UTC m=+0.086346855 container init 458c52e058ee6e29ac29e22b00ced48c095e1a18f3ea567fb43e3cb0f1ba00a0 (image=quay.io/ceph/ceph:v19, name=bold_elgamal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:43 np0005591760 podman[76421]: 2026-01-22 09:32:43.80019067 +0000 UTC m=+0.090712665 container start 458c52e058ee6e29ac29e22b00ced48c095e1a18f3ea567fb43e3cb0f1ba00a0 (image=quay.io/ceph/ceph:v19, name=bold_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:43 np0005591760 podman[76421]: 2026-01-22 09:32:43.801526418 +0000 UTC m=+0.092048414 container attach 458c52e058ee6e29ac29e22b00ced48c095e1a18f3ea567fb43e3cb0f1ba00a0 (image=quay.io/ceph/ceph:v19, name=bold_elgamal, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:43 np0005591760 podman[76421]: 2026-01-22 09:32:43.728553305 +0000 UTC m=+0.019075321 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:44 np0005591760 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76521 (sysctl)
Jan 22 04:32:44 np0005591760 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 22 04:32:44 np0005591760 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 22 04:32:44 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:32:44 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 22 04:32:44 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:44 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Added label _admin to host compute-0
Jan 22 04:32:44 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 22 04:32:44 np0005591760 bold_elgamal[76445]: Added label _admin to host compute-0
Jan 22 04:32:44 np0005591760 systemd[1]: libpod-458c52e058ee6e29ac29e22b00ced48c095e1a18f3ea567fb43e3cb0f1ba00a0.scope: Deactivated successfully.
Jan 22 04:32:44 np0005591760 podman[76528]: 2026-01-22 09:32:44.108547184 +0000 UTC m=+0.016639480 container died 458c52e058ee6e29ac29e22b00ced48c095e1a18f3ea567fb43e3cb0f1ba00a0 (image=quay.io/ceph/ceph:v19, name=bold_elgamal, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:44 np0005591760 systemd[1]: var-lib-containers-storage-overlay-91ddfa974061f1263028076bfd8cc6233fd70c398d10b533147cb6d81e75b8cd-merged.mount: Deactivated successfully.
Jan 22 04:32:44 np0005591760 podman[76528]: 2026-01-22 09:32:44.130535948 +0000 UTC m=+0.038628234 container remove 458c52e058ee6e29ac29e22b00ced48c095e1a18f3ea567fb43e3cb0f1ba00a0 (image=quay.io/ceph/ceph:v19, name=bold_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 04:32:44 np0005591760 systemd[1]: libpod-conmon-458c52e058ee6e29ac29e22b00ced48c095e1a18f3ea567fb43e3cb0f1ba00a0.scope: Deactivated successfully.
Jan 22 04:32:44 np0005591760 podman[76542]: 2026-01-22 09:32:44.177902816 +0000 UTC m=+0.027906331 container create 0e3844a9d80c9ebea9be58289c5c05618f79e8b65ba9563143831c77ee712540 (image=quay.io/ceph/ceph:v19, name=angry_swanson, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:44 np0005591760 systemd[1]: Started libpod-conmon-0e3844a9d80c9ebea9be58289c5c05618f79e8b65ba9563143831c77ee712540.scope.
Jan 22 04:32:44 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7d0ba759cb5ddce44c06683b2fe83f6b99dc22d0bcae9b701ebe4bbac06653/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7d0ba759cb5ddce44c06683b2fe83f6b99dc22d0bcae9b701ebe4bbac06653/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7d0ba759cb5ddce44c06683b2fe83f6b99dc22d0bcae9b701ebe4bbac06653/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:44 np0005591760 podman[76542]: 2026-01-22 09:32:44.226319133 +0000 UTC m=+0.076322668 container init 0e3844a9d80c9ebea9be58289c5c05618f79e8b65ba9563143831c77ee712540 (image=quay.io/ceph/ceph:v19, name=angry_swanson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:44 np0005591760 podman[76542]: 2026-01-22 09:32:44.231109553 +0000 UTC m=+0.081113068 container start 0e3844a9d80c9ebea9be58289c5c05618f79e8b65ba9563143831c77ee712540 (image=quay.io/ceph/ceph:v19, name=angry_swanson, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:44 np0005591760 podman[76542]: 2026-01-22 09:32:44.232230848 +0000 UTC m=+0.082234361 container attach 0e3844a9d80c9ebea9be58289c5c05618f79e8b65ba9563143831c77ee712540 (image=quay.io/ceph/ceph:v19, name=angry_swanson, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:44 np0005591760 podman[76542]: 2026-01-22 09:32:44.166210274 +0000 UTC m=+0.016213807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:44 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:32:44 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:44 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 22 04:32:44 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399471304' entity='client.admin' 
Jan 22 04:32:44 np0005591760 angry_swanson[76557]: set mgr/dashboard/cluster/status
Jan 22 04:32:44 np0005591760 systemd[1]: libpod-0e3844a9d80c9ebea9be58289c5c05618f79e8b65ba9563143831c77ee712540.scope: Deactivated successfully.
Jan 22 04:32:44 np0005591760 podman[76542]: 2026-01-22 09:32:44.666225752 +0000 UTC m=+0.516229265 container died 0e3844a9d80c9ebea9be58289c5c05618f79e8b65ba9563143831c77ee712540 (image=quay.io/ceph/ceph:v19, name=angry_swanson, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:44 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ff7d0ba759cb5ddce44c06683b2fe83f6b99dc22d0bcae9b701ebe4bbac06653-merged.mount: Deactivated successfully.
Jan 22 04:32:44 np0005591760 podman[76542]: 2026-01-22 09:32:44.686584533 +0000 UTC m=+0.536588047 container remove 0e3844a9d80c9ebea9be58289c5c05618f79e8b65ba9563143831c77ee712540 (image=quay.io/ceph/ceph:v19, name=angry_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:44 np0005591760 systemd[1]: libpod-conmon-0e3844a9d80c9ebea9be58289c5c05618f79e8b65ba9563143831c77ee712540.scope: Deactivated successfully.
Jan 22 04:32:44 np0005591760 ceph-mgr[74522]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 04:32:44 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:44 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:44 np0005591760 ceph-mon[74254]: Added label _admin to host compute-0
Jan 22 04:32:44 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:44 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/2399471304' entity='client.admin' 
Jan 22 04:32:45 np0005591760 podman[76757]: 2026-01-22 09:32:45.005404705 +0000 UTC m=+0.027742482 container create 139ccbc9fd863bdc1fdf9252b680982a5f1b7c48f71df83bbe8b37505c4a0758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:45 np0005591760 systemd[1]: Started libpod-conmon-139ccbc9fd863bdc1fdf9252b680982a5f1b7c48f71df83bbe8b37505c4a0758.scope.
Jan 22 04:32:45 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:45 np0005591760 podman[76757]: 2026-01-22 09:32:45.058808082 +0000 UTC m=+0.081145860 container init 139ccbc9fd863bdc1fdf9252b680982a5f1b7c48f71df83bbe8b37505c4a0758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hodgkin, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 22 04:32:45 np0005591760 podman[76757]: 2026-01-22 09:32:45.063021487 +0000 UTC m=+0.085359263 container start 139ccbc9fd863bdc1fdf9252b680982a5f1b7c48f71df83bbe8b37505c4a0758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hodgkin, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 22 04:32:45 np0005591760 podman[76757]: 2026-01-22 09:32:45.064066837 +0000 UTC m=+0.086404614 container attach 139ccbc9fd863bdc1fdf9252b680982a5f1b7c48f71df83bbe8b37505c4a0758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hodgkin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 22 04:32:45 np0005591760 priceless_hodgkin[76791]: 167 167
Jan 22 04:32:45 np0005591760 systemd[1]: libpod-139ccbc9fd863bdc1fdf9252b680982a5f1b7c48f71df83bbe8b37505c4a0758.scope: Deactivated successfully.
Jan 22 04:32:45 np0005591760 podman[76757]: 2026-01-22 09:32:45.066476129 +0000 UTC m=+0.088813906 container died 139ccbc9fd863bdc1fdf9252b680982a5f1b7c48f71df83bbe8b37505c4a0758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:32:45 np0005591760 systemd[1]: var-lib-containers-storage-overlay-15ca0e2aaaf6dd96e3100301b315b28e36891c525d7dcadc4b2f510a88e4c8c5-merged.mount: Deactivated successfully.
Jan 22 04:32:45 np0005591760 podman[76757]: 2026-01-22 09:32:45.081644344 +0000 UTC m=+0.103982122 container remove 139ccbc9fd863bdc1fdf9252b680982a5f1b7c48f71df83bbe8b37505c4a0758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hodgkin, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:45 np0005591760 podman[76757]: 2026-01-22 09:32:44.993114625 +0000 UTC m=+0.015452422 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:32:45 np0005591760 systemd[1]: libpod-conmon-139ccbc9fd863bdc1fdf9252b680982a5f1b7c48f71df83bbe8b37505c4a0758.scope: Deactivated successfully.
Jan 22 04:32:45 np0005591760 python3[76787]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:32:45 np0005591760 podman[76808]: 2026-01-22 09:32:45.160854605 +0000 UTC m=+0.026958966 container create f3b56b1277c659f63d1044e770187dbd115e2ce0bfa795934bce01180bfbd49e (image=quay.io/ceph/ceph:v19, name=determined_darwin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 22 04:32:45 np0005591760 systemd[1]: Started libpod-conmon-f3b56b1277c659f63d1044e770187dbd115e2ce0bfa795934bce01180bfbd49e.scope.
Jan 22 04:32:45 np0005591760 podman[76823]: 2026-01-22 09:32:45.199733631 +0000 UTC m=+0.027356735 container create 6823c95fdcba0b1acec2f06ddbbcad27fd0e0fefeb9f1ff05d263e3709585979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_cartwright, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:45 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6170eddaf8a1d8826a36c7d75b0a26ca5ee1edb1d0c1853d884e699130cc6005/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6170eddaf8a1d8826a36c7d75b0a26ca5ee1edb1d0c1853d884e699130cc6005/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:45 np0005591760 podman[76808]: 2026-01-22 09:32:45.211833793 +0000 UTC m=+0.077938154 container init f3b56b1277c659f63d1044e770187dbd115e2ce0bfa795934bce01180bfbd49e (image=quay.io/ceph/ceph:v19, name=determined_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:45 np0005591760 systemd[1]: Started libpod-conmon-6823c95fdcba0b1acec2f06ddbbcad27fd0e0fefeb9f1ff05d263e3709585979.scope.
Jan 22 04:32:45 np0005591760 podman[76808]: 2026-01-22 09:32:45.21650575 +0000 UTC m=+0.082610101 container start f3b56b1277c659f63d1044e770187dbd115e2ce0bfa795934bce01180bfbd49e (image=quay.io/ceph/ceph:v19, name=determined_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 04:32:45 np0005591760 podman[76808]: 2026-01-22 09:32:45.218104455 +0000 UTC m=+0.084208805 container attach f3b56b1277c659f63d1044e770187dbd115e2ce0bfa795934bce01180bfbd49e (image=quay.io/ceph/ceph:v19, name=determined_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:45 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24ee5cce1581b26a93246fb5332711d35949695d0bdceaaae34f7c20ddc419d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24ee5cce1581b26a93246fb5332711d35949695d0bdceaaae34f7c20ddc419d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24ee5cce1581b26a93246fb5332711d35949695d0bdceaaae34f7c20ddc419d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24ee5cce1581b26a93246fb5332711d35949695d0bdceaaae34f7c20ddc419d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:45 np0005591760 podman[76808]: 2026-01-22 09:32:45.149809141 +0000 UTC m=+0.015913522 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:45 np0005591760 podman[76823]: 2026-01-22 09:32:45.258566695 +0000 UTC m=+0.086189819 container init 6823c95fdcba0b1acec2f06ddbbcad27fd0e0fefeb9f1ff05d263e3709585979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_cartwright, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 04:32:45 np0005591760 podman[76823]: 2026-01-22 09:32:45.262967291 +0000 UTC m=+0.090590405 container start 6823c95fdcba0b1acec2f06ddbbcad27fd0e0fefeb9f1ff05d263e3709585979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 22 04:32:45 np0005591760 podman[76823]: 2026-01-22 09:32:45.264263575 +0000 UTC m=+0.091886680 container attach 6823c95fdcba0b1acec2f06ddbbcad27fd0e0fefeb9f1ff05d263e3709585979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:45 np0005591760 podman[76823]: 2026-01-22 09:32:45.18922534 +0000 UTC m=+0.016848464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4294684040' entity='client.admin' 
Jan 22 04:32:45 np0005591760 systemd[1]: libpod-f3b56b1277c659f63d1044e770187dbd115e2ce0bfa795934bce01180bfbd49e.scope: Deactivated successfully.
Jan 22 04:32:45 np0005591760 conmon[76833]: conmon f3b56b1277c659f63d10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f3b56b1277c659f63d1044e770187dbd115e2ce0bfa795934bce01180bfbd49e.scope/container/memory.events
Jan 22 04:32:45 np0005591760 podman[76808]: 2026-01-22 09:32:45.491843097 +0000 UTC m=+0.357947448 container died f3b56b1277c659f63d1044e770187dbd115e2ce0bfa795934bce01180bfbd49e (image=quay.io/ceph/ceph:v19, name=determined_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:45 np0005591760 podman[76808]: 2026-01-22 09:32:45.509513259 +0000 UTC m=+0.375617610 container remove f3b56b1277c659f63d1044e770187dbd115e2ce0bfa795934bce01180bfbd49e (image=quay.io/ceph/ceph:v19, name=determined_darwin, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:45 np0005591760 systemd[1]: libpod-conmon-f3b56b1277c659f63d1044e770187dbd115e2ce0bfa795934bce01180bfbd49e.scope: Deactivated successfully.
Jan 22 04:32:45 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6170eddaf8a1d8826a36c7d75b0a26ca5ee1edb1d0c1853d884e699130cc6005-merged.mount: Deactivated successfully.
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]: [
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:    {
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:        "available": false,
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:        "being_replaced": false,
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:        "ceph_device_lvm": false,
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:        "lsm_data": {},
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:        "lvs": [],
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:        "path": "/dev/sr0",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:        "rejected_reasons": [
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "Insufficient space (<5GB)",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "Has a FileSystem"
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:        ],
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:        "sys_api": {
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "actuators": null,
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "device_nodes": [
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:                "sr0"
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            ],
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "devname": "sr0",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "human_readable_size": "474.00 KB",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "id_bus": "ata",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "model": "QEMU DVD-ROM",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "nr_requests": "64",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "parent": "/dev/sr0",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "partitions": {},
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "path": "/dev/sr0",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "removable": "1",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "rev": "2.5+",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "ro": "0",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "rotational": "1",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "sas_address": "",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "sas_device_handle": "",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "scheduler_mode": "mq-deadline",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "sectors": 0,
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "sectorsize": "2048",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "size": 485376.0,
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "support_discard": "2048",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "type": "disk",
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:            "vendor": "QEMU"
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:        }
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]:    }
Jan 22 04:32:45 np0005591760 interesting_cartwright[76842]: ]
Jan 22 04:32:45 np0005591760 systemd[1]: libpod-6823c95fdcba0b1acec2f06ddbbcad27fd0e0fefeb9f1ff05d263e3709585979.scope: Deactivated successfully.
Jan 22 04:32:45 np0005591760 podman[76823]: 2026-01-22 09:32:45.80111632 +0000 UTC m=+0.628739424 container died 6823c95fdcba0b1acec2f06ddbbcad27fd0e0fefeb9f1ff05d263e3709585979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_cartwright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:32:45 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c24ee5cce1581b26a93246fb5332711d35949695d0bdceaaae34f7c20ddc419d-merged.mount: Deactivated successfully.
Jan 22 04:32:45 np0005591760 podman[76823]: 2026-01-22 09:32:45.828552334 +0000 UTC m=+0.656175437 container remove 6823c95fdcba0b1acec2f06ddbbcad27fd0e0fefeb9f1ff05d263e3709585979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_cartwright, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:45 np0005591760 systemd[1]: libpod-conmon-6823c95fdcba0b1acec2f06ddbbcad27fd0e0fefeb9f1ff05d263e3709585979.scope: Deactivated successfully.
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 22 04:32:45 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:32:45 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:32:45 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:32:46 np0005591760 ansible-async_wrapper.py[78207]: Invoked with j963949913238 30 /home/zuul/.ansible/tmp/ansible-tmp-1769074365.7800434-37595-52495466465295/AnsiballZ_command.py _
Jan 22 04:32:46 np0005591760 ansible-async_wrapper.py[78284]: Starting module and watcher
Jan 22 04:32:46 np0005591760 ansible-async_wrapper.py[78284]: Start watching 78286 (30)
Jan 22 04:32:46 np0005591760 ansible-async_wrapper.py[78286]: Start module (78286)
Jan 22 04:32:46 np0005591760 ansible-async_wrapper.py[78207]: Return async_wrapper task started.
Jan 22 04:32:46 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:32:46 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:32:46 np0005591760 python3[78287]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:32:46 np0005591760 podman[78361]: 2026-01-22 09:32:46.404184638 +0000 UTC m=+0.027443579 container create 778ca31fbcf55e458b178c4d5ea8c2f9b02274bf1c238b26d7745d836d67c2b3 (image=quay.io/ceph/ceph:v19, name=dreamy_shamir, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 04:32:46 np0005591760 systemd[1]: Started libpod-conmon-778ca31fbcf55e458b178c4d5ea8c2f9b02274bf1c238b26d7745d836d67c2b3.scope.
Jan 22 04:32:46 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:46 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6b6b59f885fe9bfddb2f2b96426b0aa9c40cead460865b56708b57364b1bee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:46 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6b6b59f885fe9bfddb2f2b96426b0aa9c40cead460865b56708b57364b1bee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:46 np0005591760 podman[78361]: 2026-01-22 09:32:46.465539036 +0000 UTC m=+0.088797967 container init 778ca31fbcf55e458b178c4d5ea8c2f9b02274bf1c238b26d7745d836d67c2b3 (image=quay.io/ceph/ceph:v19, name=dreamy_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:46 np0005591760 podman[78361]: 2026-01-22 09:32:46.47237887 +0000 UTC m=+0.095637801 container start 778ca31fbcf55e458b178c4d5ea8c2f9b02274bf1c238b26d7745d836d67c2b3 (image=quay.io/ceph/ceph:v19, name=dreamy_shamir, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 04:32:46 np0005591760 podman[78361]: 2026-01-22 09:32:46.474991516 +0000 UTC m=+0.098250447 container attach 778ca31fbcf55e458b178c4d5ea8c2f9b02274bf1c238b26d7745d836d67c2b3 (image=quay.io/ceph/ceph:v19, name=dreamy_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:32:46 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/4294684040' entity='client.admin' 
Jan 22 04:32:46 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:46 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:46 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:46 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:46 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:32:46 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:32:46 np0005591760 podman[78361]: 2026-01-22 09:32:46.393799388 +0000 UTC m=+0.017058340 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:46 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:32:46 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:32:46 np0005591760 ceph-mgr[74522]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 04:32:46 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 04:32:46 np0005591760 dreamy_shamir[78405]: 
Jan 22 04:32:46 np0005591760 dreamy_shamir[78405]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 22 04:32:46 np0005591760 systemd[1]: libpod-778ca31fbcf55e458b178c4d5ea8c2f9b02274bf1c238b26d7745d836d67c2b3.scope: Deactivated successfully.
Jan 22 04:32:46 np0005591760 podman[78361]: 2026-01-22 09:32:46.765182876 +0000 UTC m=+0.388441807 container died 778ca31fbcf55e458b178c4d5ea8c2f9b02274bf1c238b26d7745d836d67c2b3 (image=quay.io/ceph/ceph:v19, name=dreamy_shamir, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:46 np0005591760 systemd[1]: var-lib-containers-storage-overlay-dd6b6b59f885fe9bfddb2f2b96426b0aa9c40cead460865b56708b57364b1bee-merged.mount: Deactivated successfully.
Jan 22 04:32:46 np0005591760 podman[78361]: 2026-01-22 09:32:46.783414357 +0000 UTC m=+0.406673277 container remove 778ca31fbcf55e458b178c4d5ea8c2f9b02274bf1c238b26d7745d836d67c2b3 (image=quay.io/ceph/ceph:v19, name=dreamy_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 04:32:46 np0005591760 systemd[1]: libpod-conmon-778ca31fbcf55e458b178c4d5ea8c2f9b02274bf1c238b26d7745d836d67c2b3.scope: Deactivated successfully.
Jan 22 04:32:46 np0005591760 ansible-async_wrapper.py[78286]: Module complete (78286)
Jan 22 04:32:46 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:32:46 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:47 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev c386956e-7c4f-4824-978d-2b2310ff94d5 (Updating crash deployment (+1 -> 1))
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:32:47 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 22 04:32:47 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 04:32:47 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 04:32:47 np0005591760 python3[79101]: ansible-ansible.legacy.async_status Invoked with jid=j963949913238.78207 mode=status _async_dir=/root/.ansible_async
Jan 22 04:32:47 np0005591760 podman[79184]: 2026-01-22 09:32:47.717172011 +0000 UTC m=+0.030017991 container create 7a4b6d29d15072754c26c4fff28737543af7715d705153c988bd47361dc7c1b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:32:47 np0005591760 systemd[1]: Started libpod-conmon-7a4b6d29d15072754c26c4fff28737543af7715d705153c988bd47361dc7c1b7.scope.
Jan 22 04:32:47 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:47 np0005591760 podman[79184]: 2026-01-22 09:32:47.76195568 +0000 UTC m=+0.074801660 container init 7a4b6d29d15072754c26c4fff28737543af7715d705153c988bd47361dc7c1b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_kalam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 22 04:32:47 np0005591760 podman[79184]: 2026-01-22 09:32:47.766733977 +0000 UTC m=+0.079579957 container start 7a4b6d29d15072754c26c4fff28737543af7715d705153c988bd47361dc7c1b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:47 np0005591760 podman[79184]: 2026-01-22 09:32:47.768913917 +0000 UTC m=+0.081759917 container attach 7a4b6d29d15072754c26c4fff28737543af7715d705153c988bd47361dc7c1b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 22 04:32:47 np0005591760 hardcore_kalam[79197]: 167 167
Jan 22 04:32:47 np0005591760 systemd[1]: libpod-7a4b6d29d15072754c26c4fff28737543af7715d705153c988bd47361dc7c1b7.scope: Deactivated successfully.
Jan 22 04:32:47 np0005591760 conmon[79197]: conmon 7a4b6d29d15072754c26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a4b6d29d15072754c26c4fff28737543af7715d705153c988bd47361dc7c1b7.scope/container/memory.events
Jan 22 04:32:47 np0005591760 podman[79184]: 2026-01-22 09:32:47.769940011 +0000 UTC m=+0.082785991 container died 7a4b6d29d15072754c26c4fff28737543af7715d705153c988bd47361dc7c1b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_kalam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 04:32:47 np0005591760 python3[79181]: ansible-ansible.legacy.async_status Invoked with jid=j963949913238.78207 mode=cleanup _async_dir=/root/.ansible_async
Jan 22 04:32:47 np0005591760 systemd[1]: var-lib-containers-storage-overlay-aad1b62673e59150b23787c80e8c95f7d714c00330ead4ae249a6be11bd35a1d-merged.mount: Deactivated successfully.
Jan 22 04:32:47 np0005591760 podman[79184]: 2026-01-22 09:32:47.793358782 +0000 UTC m=+0.106204762 container remove 7a4b6d29d15072754c26c4fff28737543af7715d705153c988bd47361dc7c1b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 22 04:32:47 np0005591760 podman[79184]: 2026-01-22 09:32:47.705465332 +0000 UTC m=+0.018311312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:32:47 np0005591760 systemd[1]: libpod-conmon-7a4b6d29d15072754c26c4fff28737543af7715d705153c988bd47361dc7c1b7.scope: Deactivated successfully.
Jan 22 04:32:47 np0005591760 systemd[1]: Reloading.
Jan 22 04:32:47 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:32:47 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:32:48 np0005591760 systemd[1]: Reloading.
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:32:48 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:32:48 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:32:48 np0005591760 systemd[1]: Starting Ceph crash.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:32:48 np0005591760 python3[79315]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 04:32:48 np0005591760 podman[79357]: 2026-01-22 09:32:48.37852514 +0000 UTC m=+0.027864523 container create 3a09c1a59b9ababc53577d069e4bbef3d58aad1559d90ad0628cb9fed6f5bbf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 04:32:48 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b12b8db0b9979c6db8c236909c6ebafc630619f96a08220f689063777d84f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:48 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b12b8db0b9979c6db8c236909c6ebafc630619f96a08220f689063777d84f2/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:48 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b12b8db0b9979c6db8c236909c6ebafc630619f96a08220f689063777d84f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:48 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b12b8db0b9979c6db8c236909c6ebafc630619f96a08220f689063777d84f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:48 np0005591760 podman[79357]: 2026-01-22 09:32:48.422910126 +0000 UTC m=+0.072249519 container init 3a09c1a59b9ababc53577d069e4bbef3d58aad1559d90ad0628cb9fed6f5bbf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:32:48 np0005591760 podman[79357]: 2026-01-22 09:32:48.426606855 +0000 UTC m=+0.075946238 container start 3a09c1a59b9ababc53577d069e4bbef3d58aad1559d90ad0628cb9fed6f5bbf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:48 np0005591760 bash[79357]: 3a09c1a59b9ababc53577d069e4bbef3d58aad1559d90ad0628cb9fed6f5bbf6
Jan 22 04:32:48 np0005591760 podman[79357]: 2026-01-22 09:32:48.367484926 +0000 UTC m=+0.016824329 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:32:48 np0005591760 systemd[1]: Started Ceph crash.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:48 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev c386956e-7c4f-4824-978d-2b2310ff94d5 (Updating crash deployment (+1 -> 1))
Jan 22 04:32:48 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event c386956e-7c4f-4824-978d-2b2310ff94d5 (Updating crash deployment (+1 -> 1)) in 1 seconds
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 22 04:32:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: Deploying daemon crash.compute-0 on compute-0
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: 2026-01-22T09:32:48.553+0000 7f3063177640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 22 04:32:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: 2026-01-22T09:32:48.553+0000 7f3063177640 -1 AuthRegistry(0x7f305c069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 22 04:32:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: 2026-01-22T09:32:48.553+0000 7f3063177640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 22 04:32:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: 2026-01-22T09:32:48.553+0000 7f3063177640 -1 AuthRegistry(0x7f3063175ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 22 04:32:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: 2026-01-22T09:32:48.554+0000 7f3060eec640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 22 04:32:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: 2026-01-22T09:32:48.554+0000 7f3063177640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 22 04:32:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 22 04:32:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 22 04:32:48 np0005591760 python3[79473]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:32:48 np0005591760 ceph-mgr[74522]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 04:32:48 np0005591760 podman[79487]: 2026-01-22 09:32:48.738516809 +0000 UTC m=+0.029794290 container create 3beb671fe9e006c0f1d265378746cab67978bfa4991a07a51e75c430437ca7d5 (image=quay.io/ceph/ceph:v19, name=gallant_kilby, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 22 04:32:48 np0005591760 systemd[1]: Started libpod-conmon-3beb671fe9e006c0f1d265378746cab67978bfa4991a07a51e75c430437ca7d5.scope.
Jan 22 04:32:48 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:48 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede8b8e4cc2b3985f6efb9e5e0c98c86578f734d939088b246bdb9129e77555b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:48 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede8b8e4cc2b3985f6efb9e5e0c98c86578f734d939088b246bdb9129e77555b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:48 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede8b8e4cc2b3985f6efb9e5e0c98c86578f734d939088b246bdb9129e77555b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:48 np0005591760 podman[79487]: 2026-01-22 09:32:48.791432108 +0000 UTC m=+0.082709599 container init 3beb671fe9e006c0f1d265378746cab67978bfa4991a07a51e75c430437ca7d5 (image=quay.io/ceph/ceph:v19, name=gallant_kilby, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 22 04:32:48 np0005591760 podman[79487]: 2026-01-22 09:32:48.796264558 +0000 UTC m=+0.087542039 container start 3beb671fe9e006c0f1d265378746cab67978bfa4991a07a51e75c430437ca7d5 (image=quay.io/ceph/ceph:v19, name=gallant_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:48 np0005591760 podman[79487]: 2026-01-22 09:32:48.797550052 +0000 UTC m=+0.088827533 container attach 3beb671fe9e006c0f1d265378746cab67978bfa4991a07a51e75c430437ca7d5 (image=quay.io/ceph/ceph:v19, name=gallant_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:48 np0005591760 podman[79487]: 2026-01-22 09:32:48.727609276 +0000 UTC m=+0.018886777 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:49 np0005591760 podman[79578]: 2026-01-22 09:32:49.001425726 +0000 UTC m=+0.038094379 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:32:49 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 04:32:49 np0005591760 gallant_kilby[79506]: 
Jan 22 04:32:49 np0005591760 gallant_kilby[79506]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 22 04:32:49 np0005591760 systemd[1]: libpod-3beb671fe9e006c0f1d265378746cab67978bfa4991a07a51e75c430437ca7d5.scope: Deactivated successfully.
Jan 22 04:32:49 np0005591760 conmon[79506]: conmon 3beb671fe9e006c0f1d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3beb671fe9e006c0f1d265378746cab67978bfa4991a07a51e75c430437ca7d5.scope/container/memory.events
Jan 22 04:32:49 np0005591760 podman[79487]: 2026-01-22 09:32:49.080497253 +0000 UTC m=+0.371774735 container died 3beb671fe9e006c0f1d265378746cab67978bfa4991a07a51e75c430437ca7d5 (image=quay.io/ceph/ceph:v19, name=gallant_kilby, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:32:49 np0005591760 podman[79578]: 2026-01-22 09:32:49.082042557 +0000 UTC m=+0.118711210 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Jan 22 04:32:49 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ede8b8e4cc2b3985f6efb9e5e0c98c86578f734d939088b246bdb9129e77555b-merged.mount: Deactivated successfully.
Jan 22 04:32:49 np0005591760 podman[79487]: 2026-01-22 09:32:49.10325617 +0000 UTC m=+0.394533651 container remove 3beb671fe9e006c0f1d265378746cab67978bfa4991a07a51e75c430437ca7d5 (image=quay.io/ceph/ceph:v19, name=gallant_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 04:32:49 np0005591760 systemd[1]: libpod-conmon-3beb671fe9e006c0f1d265378746cab67978bfa4991a07a51e75c430437ca7d5.scope: Deactivated successfully.
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 04:32:49 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:32:49 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 04:32:49 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 04:32:49 np0005591760 python3[79709]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 04:32:49 np0005591760 podman[79743]: 2026-01-22 09:32:49.494901735 +0000 UTC m=+0.029173059 container create 57456dafc781dc8efe9c65a8044f655ab6e61f6d01997a7e5277d35b4956d67d (image=quay.io/ceph/ceph:v19, name=romantic_mclean, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:32:49 np0005591760 systemd[1]: Started libpod-conmon-57456dafc781dc8efe9c65a8044f655ab6e61f6d01997a7e5277d35b4956d67d.scope.
Jan 22 04:32:49 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:49 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326cdcd7a0c2b832763d4bc36a8cb9acbae50a6fbf101771346b18ac2499735e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:49 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326cdcd7a0c2b832763d4bc36a8cb9acbae50a6fbf101771346b18ac2499735e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:49 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326cdcd7a0c2b832763d4bc36a8cb9acbae50a6fbf101771346b18ac2499735e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:49 np0005591760 podman[79743]: 2026-01-22 09:32:49.554128692 +0000 UTC m=+0.088400015 container init 57456dafc781dc8efe9c65a8044f655ab6e61f6d01997a7e5277d35b4956d67d (image=quay.io/ceph/ceph:v19, name=romantic_mclean, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 22 04:32:49 np0005591760 podman[79743]: 2026-01-22 09:32:49.55988805 +0000 UTC m=+0.094159374 container start 57456dafc781dc8efe9c65a8044f655ab6e61f6d01997a7e5277d35b4956d67d (image=quay.io/ceph/ceph:v19, name=romantic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:32:49 np0005591760 podman[79743]: 2026-01-22 09:32:49.561050151 +0000 UTC m=+0.095321475 container attach 57456dafc781dc8efe9c65a8044f655ab6e61f6d01997a7e5277d35b4956d67d (image=quay.io/ceph/ceph:v19, name=romantic_mclean, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:49 np0005591760 podman[79743]: 2026-01-22 09:32:49.482863209 +0000 UTC m=+0.017134554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:49 np0005591760 podman[79774]: 2026-01-22 09:32:49.642431614 +0000 UTC m=+0.028336111 container create 6bc8e98086c2437f77e193e88d17785db838069dc696fc4a77c11df819dbd6fc (image=quay.io/ceph/ceph:v19, name=beautiful_kapitsa, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:49 np0005591760 systemd[1]: Started libpod-conmon-6bc8e98086c2437f77e193e88d17785db838069dc696fc4a77c11df819dbd6fc.scope.
Jan 22 04:32:49 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:49 np0005591760 podman[79774]: 2026-01-22 09:32:49.696456394 +0000 UTC m=+0.082360891 container init 6bc8e98086c2437f77e193e88d17785db838069dc696fc4a77c11df819dbd6fc (image=quay.io/ceph/ceph:v19, name=beautiful_kapitsa, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 04:32:49 np0005591760 podman[79774]: 2026-01-22 09:32:49.701184698 +0000 UTC m=+0.087089185 container start 6bc8e98086c2437f77e193e88d17785db838069dc696fc4a77c11df819dbd6fc (image=quay.io/ceph/ceph:v19, name=beautiful_kapitsa, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 04:32:49 np0005591760 podman[79774]: 2026-01-22 09:32:49.703738473 +0000 UTC m=+0.089642980 container attach 6bc8e98086c2437f77e193e88d17785db838069dc696fc4a77c11df819dbd6fc (image=quay.io/ceph/ceph:v19, name=beautiful_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:49 np0005591760 beautiful_kapitsa[79806]: 167 167
Jan 22 04:32:49 np0005591760 systemd[1]: libpod-6bc8e98086c2437f77e193e88d17785db838069dc696fc4a77c11df819dbd6fc.scope: Deactivated successfully.
Jan 22 04:32:49 np0005591760 conmon[79806]: conmon 6bc8e98086c2437f77e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bc8e98086c2437f77e193e88d17785db838069dc696fc4a77c11df819dbd6fc.scope/container/memory.events
Jan 22 04:32:49 np0005591760 podman[79774]: 2026-01-22 09:32:49.705245893 +0000 UTC m=+0.091150381 container died 6bc8e98086c2437f77e193e88d17785db838069dc696fc4a77c11df819dbd6fc (image=quay.io/ceph/ceph:v19, name=beautiful_kapitsa, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:49 np0005591760 systemd[1]: var-lib-containers-storage-overlay-296df6f80c0d17859c6a641960483cf6037fac62061a0354e8dc68524f8c5700-merged.mount: Deactivated successfully.
Jan 22 04:32:49 np0005591760 podman[79774]: 2026-01-22 09:32:49.722525971 +0000 UTC m=+0.108430459 container remove 6bc8e98086c2437f77e193e88d17785db838069dc696fc4a77c11df819dbd6fc (image=quay.io/ceph/ceph:v19, name=beautiful_kapitsa, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:49 np0005591760 podman[79774]: 2026-01-22 09:32:49.631334043 +0000 UTC m=+0.017238540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:49 np0005591760 systemd[1]: libpod-conmon-6bc8e98086c2437f77e193e88d17785db838069dc696fc4a77c11df819dbd6fc.scope: Deactivated successfully.
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:49 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.rfmoog (unknown last config time)...
Jan 22 04:32:49 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.rfmoog (unknown last config time)...
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rfmoog", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rfmoog", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:32:49 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.rfmoog on compute-0
Jan 22 04:32:49 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.rfmoog on compute-0
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 22 04:32:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2503013057' entity='client.admin' 
Jan 22 04:32:49 np0005591760 systemd[1]: libpod-57456dafc781dc8efe9c65a8044f655ab6e61f6d01997a7e5277d35b4956d67d.scope: Deactivated successfully.
Jan 22 04:32:49 np0005591760 podman[79743]: 2026-01-22 09:32:49.842063688 +0000 UTC m=+0.376335023 container died 57456dafc781dc8efe9c65a8044f655ab6e61f6d01997a7e5277d35b4956d67d (image=quay.io/ceph/ceph:v19, name=romantic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:49 np0005591760 systemd[1]: var-lib-containers-storage-overlay-326cdcd7a0c2b832763d4bc36a8cb9acbae50a6fbf101771346b18ac2499735e-merged.mount: Deactivated successfully.
Jan 22 04:32:49 np0005591760 podman[79743]: 2026-01-22 09:32:49.864335757 +0000 UTC m=+0.398607081 container remove 57456dafc781dc8efe9c65a8044f655ab6e61f6d01997a7e5277d35b4956d67d (image=quay.io/ceph/ceph:v19, name=romantic_mclean, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 22 04:32:49 np0005591760 systemd[1]: libpod-conmon-57456dafc781dc8efe9c65a8044f655ab6e61f6d01997a7e5277d35b4956d67d.scope: Deactivated successfully.
Jan 22 04:32:50 np0005591760 podman[79921]: 2026-01-22 09:32:50.091563634 +0000 UTC m=+0.030517583 container create e94745c490b71ddc15bf2ba49a8e750e398a08007b9a29a8bdd68caa3439d678 (image=quay.io/ceph/ceph:v19, name=hopeful_mestorf, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 22 04:32:50 np0005591760 python3[79906]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:32:50 np0005591760 systemd[1]: Started libpod-conmon-e94745c490b71ddc15bf2ba49a8e750e398a08007b9a29a8bdd68caa3439d678.scope.
Jan 22 04:32:50 np0005591760 podman[79931]: 2026-01-22 09:32:50.134799956 +0000 UTC m=+0.029276676 container create 2b42eac15f2e0c61df69ee0f52468bf8a2cf7cbbdd7625f17a40a9bf1242314d (image=quay.io/ceph/ceph:v19, name=beautiful_cartwright, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 04:32:50 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:50 np0005591760 podman[79921]: 2026-01-22 09:32:50.145971618 +0000 UTC m=+0.084925567 container init e94745c490b71ddc15bf2ba49a8e750e398a08007b9a29a8bdd68caa3439d678 (image=quay.io/ceph/ceph:v19, name=hopeful_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 22 04:32:50 np0005591760 systemd[1]: Started libpod-conmon-2b42eac15f2e0c61df69ee0f52468bf8a2cf7cbbdd7625f17a40a9bf1242314d.scope.
Jan 22 04:32:50 np0005591760 podman[79921]: 2026-01-22 09:32:50.15418935 +0000 UTC m=+0.093143288 container start e94745c490b71ddc15bf2ba49a8e750e398a08007b9a29a8bdd68caa3439d678 (image=quay.io/ceph/ceph:v19, name=hopeful_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:32:50 np0005591760 podman[79921]: 2026-01-22 09:32:50.155257012 +0000 UTC m=+0.094210951 container attach e94745c490b71ddc15bf2ba49a8e750e398a08007b9a29a8bdd68caa3439d678 (image=quay.io/ceph/ceph:v19, name=hopeful_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 04:32:50 np0005591760 hopeful_mestorf[79941]: 167 167
Jan 22 04:32:50 np0005591760 systemd[1]: libpod-e94745c490b71ddc15bf2ba49a8e750e398a08007b9a29a8bdd68caa3439d678.scope: Deactivated successfully.
Jan 22 04:32:50 np0005591760 podman[79921]: 2026-01-22 09:32:50.159813341 +0000 UTC m=+0.098767290 container died e94745c490b71ddc15bf2ba49a8e750e398a08007b9a29a8bdd68caa3439d678 (image=quay.io/ceph/ceph:v19, name=hopeful_mestorf, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:32:50 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcfed17009b83dacef1a67536206016360d1b7b90901a0c8d5a058e15d38e93/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcfed17009b83dacef1a67536206016360d1b7b90901a0c8d5a058e15d38e93/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcfed17009b83dacef1a67536206016360d1b7b90901a0c8d5a058e15d38e93/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:50 np0005591760 podman[79921]: 2026-01-22 09:32:50.080632908 +0000 UTC m=+0.019586857 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:50 np0005591760 podman[79931]: 2026-01-22 09:32:50.178163716 +0000 UTC m=+0.072640456 container init 2b42eac15f2e0c61df69ee0f52468bf8a2cf7cbbdd7625f17a40a9bf1242314d (image=quay.io/ceph/ceph:v19, name=beautiful_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 04:32:50 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3a0f7e802fd30379d1fa91795162ce3c1909ac15e6c81b6bd765cc66fadad4f9-merged.mount: Deactivated successfully.
Jan 22 04:32:50 np0005591760 podman[79931]: 2026-01-22 09:32:50.182586785 +0000 UTC m=+0.077063505 container start 2b42eac15f2e0c61df69ee0f52468bf8a2cf7cbbdd7625f17a40a9bf1242314d (image=quay.io/ceph/ceph:v19, name=beautiful_cartwright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:50 np0005591760 podman[79921]: 2026-01-22 09:32:50.187237112 +0000 UTC m=+0.126191051 container remove e94745c490b71ddc15bf2ba49a8e750e398a08007b9a29a8bdd68caa3439d678 (image=quay.io/ceph/ceph:v19, name=hopeful_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:50 np0005591760 podman[79931]: 2026-01-22 09:32:50.192018175 +0000 UTC m=+0.086494916 container attach 2b42eac15f2e0c61df69ee0f52468bf8a2cf7cbbdd7625f17a40a9bf1242314d (image=quay.io/ceph/ceph:v19, name=beautiful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 04:32:50 np0005591760 systemd[1]: libpod-conmon-e94745c490b71ddc15bf2ba49a8e750e398a08007b9a29a8bdd68caa3439d678.scope: Deactivated successfully.
Jan 22 04:32:50 np0005591760 podman[79931]: 2026-01-22 09:32:50.124208497 +0000 UTC m=+0.018685238 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3460691081' entity='client.admin' 
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:50 np0005591760 systemd[1]: libpod-2b42eac15f2e0c61df69ee0f52468bf8a2cf7cbbdd7625f17a40a9bf1242314d.scope: Deactivated successfully.
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rfmoog", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/2503013057' entity='client.admin' 
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/3460691081' entity='client.admin' 
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:32:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:50 np0005591760 podman[80011]: 2026-01-22 09:32:50.495475044 +0000 UTC m=+0.017065694 container died 2b42eac15f2e0c61df69ee0f52468bf8a2cf7cbbdd7625f17a40a9bf1242314d (image=quay.io/ceph/ceph:v19, name=beautiful_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 22 04:32:50 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3fcfed17009b83dacef1a67536206016360d1b7b90901a0c8d5a058e15d38e93-merged.mount: Deactivated successfully.
Jan 22 04:32:50 np0005591760 podman[80011]: 2026-01-22 09:32:50.514215914 +0000 UTC m=+0.035806564 container remove 2b42eac15f2e0c61df69ee0f52468bf8a2cf7cbbdd7625f17a40a9bf1242314d (image=quay.io/ceph/ceph:v19, name=beautiful_cartwright, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:50 np0005591760 systemd[1]: libpod-conmon-2b42eac15f2e0c61df69ee0f52468bf8a2cf7cbbdd7625f17a40a9bf1242314d.scope: Deactivated successfully.
Jan 22 04:32:50 np0005591760 ceph-mgr[74522]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 22 04:32:50 np0005591760 python3[80072]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:32:50 np0005591760 podman[80073]: 2026-01-22 09:32:50.814313109 +0000 UTC m=+0.027923783 container create 4942bce681632c83b65e7e48a1a73a209d03625aca2b98dc9e6a61ccd7927de5 (image=quay.io/ceph/ceph:v19, name=angry_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:50 np0005591760 systemd[1]: Started libpod-conmon-4942bce681632c83b65e7e48a1a73a209d03625aca2b98dc9e6a61ccd7927de5.scope.
Jan 22 04:32:50 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0984ebc9c0bec94a497cb280acacd7db65cbf504ecdad6c8ebafaa43aab4a3bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0984ebc9c0bec94a497cb280acacd7db65cbf504ecdad6c8ebafaa43aab4a3bc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0984ebc9c0bec94a497cb280acacd7db65cbf504ecdad6c8ebafaa43aab4a3bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:50 np0005591760 podman[80073]: 2026-01-22 09:32:50.869807439 +0000 UTC m=+0.083418114 container init 4942bce681632c83b65e7e48a1a73a209d03625aca2b98dc9e6a61ccd7927de5 (image=quay.io/ceph/ceph:v19, name=angry_feistel, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:32:50 np0005591760 podman[80073]: 2026-01-22 09:32:50.874851729 +0000 UTC m=+0.088462403 container start 4942bce681632c83b65e7e48a1a73a209d03625aca2b98dc9e6a61ccd7927de5 (image=quay.io/ceph/ceph:v19, name=angry_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 04:32:50 np0005591760 podman[80073]: 2026-01-22 09:32:50.876084533 +0000 UTC m=+0.089695207 container attach 4942bce681632c83b65e7e48a1a73a209d03625aca2b98dc9e6a61ccd7927de5 (image=quay.io/ceph/ceph:v19, name=angry_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 04:32:50 np0005591760 podman[80073]: 2026-01-22 09:32:50.80227766 +0000 UTC m=+0.015888354 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 22 04:32:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/943079522' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 22 04:32:51 np0005591760 ansible-async_wrapper.py[78284]: Done in kid B.
Jan 22 04:32:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 22 04:32:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 04:32:51 np0005591760 ceph-mon[74254]: Reconfiguring mgr.compute-0.rfmoog (unknown last config time)...
Jan 22 04:32:51 np0005591760 ceph-mon[74254]: Reconfiguring daemon mgr.compute-0.rfmoog on compute-0
Jan 22 04:32:51 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/943079522' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 22 04:32:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/943079522' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 22 04:32:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 22 04:32:51 np0005591760 angry_feistel[80085]: set require_min_compat_client to mimic
Jan 22 04:32:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 22 04:32:51 np0005591760 systemd[1]: libpod-4942bce681632c83b65e7e48a1a73a209d03625aca2b98dc9e6a61ccd7927de5.scope: Deactivated successfully.
Jan 22 04:32:51 np0005591760 podman[80110]: 2026-01-22 09:32:51.527392674 +0000 UTC m=+0.017180080 container died 4942bce681632c83b65e7e48a1a73a209d03625aca2b98dc9e6a61ccd7927de5 (image=quay.io/ceph/ceph:v19, name=angry_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 04:32:51 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0984ebc9c0bec94a497cb280acacd7db65cbf504ecdad6c8ebafaa43aab4a3bc-merged.mount: Deactivated successfully.
Jan 22 04:32:51 np0005591760 podman[80110]: 2026-01-22 09:32:51.544494045 +0000 UTC m=+0.034281450 container remove 4942bce681632c83b65e7e48a1a73a209d03625aca2b98dc9e6a61ccd7927de5 (image=quay.io/ceph/ceph:v19, name=angry_feistel, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:32:51 np0005591760 systemd[1]: libpod-conmon-4942bce681632c83b65e7e48a1a73a209d03625aca2b98dc9e6a61ccd7927de5.scope: Deactivated successfully.
Jan 22 04:32:51 np0005591760 python3[80146]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:32:52 np0005591760 podman[80147]: 2026-01-22 09:32:52.035183861 +0000 UTC m=+0.028044141 container create c2f112f6b57cabb96f97c3b77052f16585e3a74203e71f8a209b07ed63da8444 (image=quay.io/ceph/ceph:v19, name=fervent_elgamal, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:32:52 np0005591760 systemd[1]: Started libpod-conmon-c2f112f6b57cabb96f97c3b77052f16585e3a74203e71f8a209b07ed63da8444.scope.
Jan 22 04:32:52 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:32:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a45fbd23f12c07819a4de3beb70a9f937232b7b652c62973b72a028e88cf7e3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a45fbd23f12c07819a4de3beb70a9f937232b7b652c62973b72a028e88cf7e3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a45fbd23f12c07819a4de3beb70a9f937232b7b652c62973b72a028e88cf7e3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:32:52 np0005591760 podman[80147]: 2026-01-22 09:32:52.090699672 +0000 UTC m=+0.083559952 container init c2f112f6b57cabb96f97c3b77052f16585e3a74203e71f8a209b07ed63da8444 (image=quay.io/ceph/ceph:v19, name=fervent_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:32:52 np0005591760 podman[80147]: 2026-01-22 09:32:52.095134382 +0000 UTC m=+0.087994663 container start c2f112f6b57cabb96f97c3b77052f16585e3a74203e71f8a209b07ed63da8444 (image=quay.io/ceph/ceph:v19, name=fervent_elgamal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 04:32:52 np0005591760 podman[80147]: 2026-01-22 09:32:52.096429815 +0000 UTC m=+0.089290094 container attach c2f112f6b57cabb96f97c3b77052f16585e3a74203e71f8a209b07ed63da8444 (image=quay.io/ceph/ceph:v19, name=fervent_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:32:52 np0005591760 podman[80147]: 2026-01-22 09:32:52.023954952 +0000 UTC m=+0.016815232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:32:52 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/943079522' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:52 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Added host compute-0
Jan 22 04:32:52 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:52 np0005591760 ceph-mgr[74522]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 22 04:32:52 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 22 04:32:52 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 1 completed events
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:32:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:32:53 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:53 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:53 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:53 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:53 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:32:53 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:53 np0005591760 ceph-mon[74254]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 22 04:32:53 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:53 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 22 04:32:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 22 04:32:54 np0005591760 ceph-mon[74254]: Added host compute-0
Jan 22 04:32:54 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:32:55 np0005591760 ceph-mon[74254]: Deploying cephadm binary to compute-1
Jan 22 04:32:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 22 04:32:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:56 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Added host compute-1
Jan 22 04:32:56 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 22 04:32:56 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:32:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:32:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:32:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:57 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 22 04:32:57 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 22 04:32:57 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:57 np0005591760 ceph-mon[74254]: Added host compute-1
Jan 22 04:32:57 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:57 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:32:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:32:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:32:58 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:32:58 np0005591760 ceph-mon[74254]: Deploying cephadm binary to compute-2
Jan 22 04:32:58 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 22 04:33:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Added host compute-2
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 22 04:33:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 22 04:33:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 22 04:33:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 22 04:33:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 22 04:33:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 22 04:33:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:00 np0005591760 fervent_elgamal[80160]: Added host 'compute-0' with addr '192.168.122.100'
Jan 22 04:33:00 np0005591760 fervent_elgamal[80160]: Added host 'compute-1' with addr '192.168.122.101'
Jan 22 04:33:00 np0005591760 fervent_elgamal[80160]: Added host 'compute-2' with addr '192.168.122.102'
Jan 22 04:33:00 np0005591760 fervent_elgamal[80160]: Scheduled mon update...
Jan 22 04:33:00 np0005591760 fervent_elgamal[80160]: Scheduled mgr update...
Jan 22 04:33:00 np0005591760 fervent_elgamal[80160]: Scheduled osd.default_drive_group update...
Jan 22 04:33:00 np0005591760 systemd[1]: libpod-c2f112f6b57cabb96f97c3b77052f16585e3a74203e71f8a209b07ed63da8444.scope: Deactivated successfully.
Jan 22 04:33:00 np0005591760 conmon[80160]: conmon c2f112f6b57cabb96f97 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2f112f6b57cabb96f97c3b77052f16585e3a74203e71f8a209b07ed63da8444.scope/container/memory.events
Jan 22 04:33:00 np0005591760 podman[80147]: 2026-01-22 09:33:00.456564278 +0000 UTC m=+8.449424558 container died c2f112f6b57cabb96f97c3b77052f16585e3a74203e71f8a209b07ed63da8444 (image=quay.io/ceph/ceph:v19, name=fervent_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:33:00 np0005591760 systemd[1]: var-lib-containers-storage-overlay-9a45fbd23f12c07819a4de3beb70a9f937232b7b652c62973b72a028e88cf7e3-merged.mount: Deactivated successfully.
Jan 22 04:33:00 np0005591760 podman[80147]: 2026-01-22 09:33:00.476161013 +0000 UTC m=+8.469021293 container remove c2f112f6b57cabb96f97c3b77052f16585e3a74203e71f8a209b07ed63da8444 (image=quay.io/ceph/ceph:v19, name=fervent_elgamal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 04:33:00 np0005591760 systemd[1]: libpod-conmon-c2f112f6b57cabb96f97c3b77052f16585e3a74203e71f8a209b07ed63da8444.scope: Deactivated successfully.
Jan 22 04:33:00 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:00 np0005591760 python3[80314]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:33:00 np0005591760 podman[80316]: 2026-01-22 09:33:00.844834239 +0000 UTC m=+0.031891483 container create e716dce0217029bb84fdb28649ad7bfe70019213b1c8ae086e8b408658b737f3 (image=quay.io/ceph/ceph:v19, name=gallant_burnell, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:33:00 np0005591760 systemd[1]: Started libpod-conmon-e716dce0217029bb84fdb28649ad7bfe70019213b1c8ae086e8b408658b737f3.scope.
Jan 22 04:33:00 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47c9fe67422fde058081cbb7eb1fb8d98ef48dae5e2bb000b06e8baf12b2e64/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47c9fe67422fde058081cbb7eb1fb8d98ef48dae5e2bb000b06e8baf12b2e64/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47c9fe67422fde058081cbb7eb1fb8d98ef48dae5e2bb000b06e8baf12b2e64/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:00 np0005591760 podman[80316]: 2026-01-22 09:33:00.910338301 +0000 UTC m=+0.097395564 container init e716dce0217029bb84fdb28649ad7bfe70019213b1c8ae086e8b408658b737f3 (image=quay.io/ceph/ceph:v19, name=gallant_burnell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:33:00 np0005591760 podman[80316]: 2026-01-22 09:33:00.91446495 +0000 UTC m=+0.101522194 container start e716dce0217029bb84fdb28649ad7bfe70019213b1c8ae086e8b408658b737f3 (image=quay.io/ceph/ceph:v19, name=gallant_burnell, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:33:00 np0005591760 podman[80316]: 2026-01-22 09:33:00.915600671 +0000 UTC m=+0.102657915 container attach e716dce0217029bb84fdb28649ad7bfe70019213b1c8ae086e8b408658b737f3 (image=quay.io/ceph/ceph:v19, name=gallant_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:33:00 np0005591760 podman[80316]: 2026-01-22 09:33:00.832447317 +0000 UTC m=+0.019504582 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:33:01 np0005591760 gallant_burnell[80329]: 
Jan 22 04:33:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 22 04:33:01 np0005591760 gallant_burnell[80329]: {"fsid":"43df7a30-cf5f-5209-adfd-bf44298b19f2","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":43,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-22T09:32:16:777810+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-22T09:32:16.778638+0000","services":{}},"progress_events":{}}
Jan 22 04:33:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412042926' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 04:33:01 np0005591760 systemd[1]: libpod-e716dce0217029bb84fdb28649ad7bfe70019213b1c8ae086e8b408658b737f3.scope: Deactivated successfully.
Jan 22 04:33:01 np0005591760 podman[80316]: 2026-01-22 09:33:01.316101571 +0000 UTC m=+0.503158815 container died e716dce0217029bb84fdb28649ad7bfe70019213b1c8ae086e8b408658b737f3 (image=quay.io/ceph/ceph:v19, name=gallant_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 04:33:01 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c47c9fe67422fde058081cbb7eb1fb8d98ef48dae5e2bb000b06e8baf12b2e64-merged.mount: Deactivated successfully.
Jan 22 04:33:01 np0005591760 podman[80316]: 2026-01-22 09:33:01.335649373 +0000 UTC m=+0.522706617 container remove e716dce0217029bb84fdb28649ad7bfe70019213b1c8ae086e8b408658b737f3 (image=quay.io/ceph/ceph:v19, name=gallant_burnell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:33:01 np0005591760 systemd[1]: libpod-conmon-e716dce0217029bb84fdb28649ad7bfe70019213b1c8ae086e8b408658b737f3.scope: Deactivated successfully.
Jan 22 04:33:01 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:01 np0005591760 ceph-mon[74254]: Added host compute-2
Jan 22 04:33:01 np0005591760 ceph-mon[74254]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 22 04:33:01 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:01 np0005591760 ceph-mon[74254]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 22 04:33:01 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:01 np0005591760 ceph-mon[74254]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 22 04:33:01 np0005591760 ceph-mon[74254]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 22 04:33:01 np0005591760 ceph-mon[74254]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 22 04:33:01 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:02 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:02 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:33:02 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:33:02 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:33:02 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:33:02 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:33:02 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:33:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:33:04 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:06 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:33:08 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:10 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:12 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:33:14 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 22 04:33:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:33:15 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:33:15 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:33:16 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:33:16 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:33:16 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:33:16 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:33:16 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:16 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:16 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:16 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:16 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:33:16 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:33:16 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:16 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:33:16 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:17 np0005591760 ceph-mgr[74522]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 04:33:17 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 22 04:33:17 np0005591760 ceph-mgr[74522]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:33:17.096+0000 7f7ebc5c7640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 22 04:33:17 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: service_name: mon
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: placement:
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  hosts:
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  - compute-0
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  - compute-1
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  - compute-2
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:33:17.096+0000 7f7ebc5c7640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: service_name: mgr
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: placement:
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  hosts:
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  - compute-0
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  - compute-1
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  - compute-2
Jan 22 04:33:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 22 04:33:17 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 9562b70f-11ae-4f8f-aa4d-6bde4fca48ec (Updating crash deployment (+1 -> 2))
Jan 22 04:33:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:33:17 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 22 04:33:17 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 04:33:17 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 04:33:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:33:18 np0005591760 ceph-mon[74254]: Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:33:18 np0005591760 ceph-mon[74254]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 22 04:33:18 np0005591760 ceph-mon[74254]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 22 04:33:18 np0005591760 ceph-mon[74254]: Deploying daemon crash.compute-1 on compute-1
Jan 22 04:33:18 np0005591760 ceph-mon[74254]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 22 04:33:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:19 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 9562b70f-11ae-4f8f-aa4d-6bde4fca48ec (Updating crash deployment (+1 -> 2))
Jan 22 04:33:19 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 9562b70f-11ae-4f8f-aa4d-6bde4fca48ec (Updating crash deployment (+1 -> 2)) in 2 seconds
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:33:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:33:19 np0005591760 podman[80445]: 2026-01-22 09:33:19.578166449 +0000 UTC m=+0.024756993 container create 3f6efcc4b44df206878eb41f2880407d0757f0b83f5ebeaf1adeb93c2904308d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dhawan, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:33:19 np0005591760 systemd[1]: Started libpod-conmon-3f6efcc4b44df206878eb41f2880407d0757f0b83f5ebeaf1adeb93c2904308d.scope.
Jan 22 04:33:19 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:19 np0005591760 podman[80445]: 2026-01-22 09:33:19.624827517 +0000 UTC m=+0.071418071 container init 3f6efcc4b44df206878eb41f2880407d0757f0b83f5ebeaf1adeb93c2904308d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dhawan, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 04:33:19 np0005591760 podman[80445]: 2026-01-22 09:33:19.629065566 +0000 UTC m=+0.075656110 container start 3f6efcc4b44df206878eb41f2880407d0757f0b83f5ebeaf1adeb93c2904308d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 04:33:19 np0005591760 podman[80445]: 2026-01-22 09:33:19.630258806 +0000 UTC m=+0.076849350 container attach 3f6efcc4b44df206878eb41f2880407d0757f0b83f5ebeaf1adeb93c2904308d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dhawan, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:33:19 np0005591760 pensive_dhawan[80458]: 167 167
Jan 22 04:33:19 np0005591760 systemd[1]: libpod-3f6efcc4b44df206878eb41f2880407d0757f0b83f5ebeaf1adeb93c2904308d.scope: Deactivated successfully.
Jan 22 04:33:19 np0005591760 podman[80445]: 2026-01-22 09:33:19.632428787 +0000 UTC m=+0.079019331 container died 3f6efcc4b44df206878eb41f2880407d0757f0b83f5ebeaf1adeb93c2904308d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:33:19 np0005591760 systemd[1]: var-lib-containers-storage-overlay-5de50763b6fc36a6b89b344e6ce8f93b1a618ddc654056a51248d87b68478390-merged.mount: Deactivated successfully.
Jan 22 04:33:19 np0005591760 podman[80445]: 2026-01-22 09:33:19.647654741 +0000 UTC m=+0.094245285 container remove 3f6efcc4b44df206878eb41f2880407d0757f0b83f5ebeaf1adeb93c2904308d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dhawan, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:33:19 np0005591760 podman[80445]: 2026-01-22 09:33:19.568617237 +0000 UTC m=+0.015207801 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:33:19 np0005591760 systemd[1]: libpod-conmon-3f6efcc4b44df206878eb41f2880407d0757f0b83f5ebeaf1adeb93c2904308d.scope: Deactivated successfully.
Jan 22 04:33:19 np0005591760 podman[80480]: 2026-01-22 09:33:19.758152466 +0000 UTC m=+0.026384553 container create ba315523dae041c68ebcb90ce670d395aafb8be555582792e25387c7208184c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jennings, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 04:33:19 np0005591760 systemd[1]: Started libpod-conmon-ba315523dae041c68ebcb90ce670d395aafb8be555582792e25387c7208184c9.scope.
Jan 22 04:33:19 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f296e57ae5e707b4e5329e5807ca2a27605832833bfdafa89ddb4794dcfbbf87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f296e57ae5e707b4e5329e5807ca2a27605832833bfdafa89ddb4794dcfbbf87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f296e57ae5e707b4e5329e5807ca2a27605832833bfdafa89ddb4794dcfbbf87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f296e57ae5e707b4e5329e5807ca2a27605832833bfdafa89ddb4794dcfbbf87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f296e57ae5e707b4e5329e5807ca2a27605832833bfdafa89ddb4794dcfbbf87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:19 np0005591760 podman[80480]: 2026-01-22 09:33:19.815529887 +0000 UTC m=+0.083761974 container init ba315523dae041c68ebcb90ce670d395aafb8be555582792e25387c7208184c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 22 04:33:19 np0005591760 podman[80480]: 2026-01-22 09:33:19.82018918 +0000 UTC m=+0.088421257 container start ba315523dae041c68ebcb90ce670d395aafb8be555582792e25387c7208184c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:33:19 np0005591760 podman[80480]: 2026-01-22 09:33:19.823434699 +0000 UTC m=+0.091666776 container attach ba315523dae041c68ebcb90ce670d395aafb8be555582792e25387c7208184c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 22 04:33:19 np0005591760 podman[80480]: 2026-01-22 09:33:19.747459136 +0000 UTC m=+0.015691223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 64f30ffd-1e43-4897-997f-ebad3f519f02
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "64f30ffd-1e43-4897-997f-ebad3f519f02"} v 0)
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1152048619' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "64f30ffd-1e43-4897-997f-ebad3f519f02"}]: dispatch
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1152048619' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "64f30ffd-1e43-4897-997f-ebad3f519f02"}]': finished
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 04:33:20 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ca0e1490-5ec5-41b2-9f5b-59dd9019d505"} v 0)
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2263674412' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ca0e1490-5ec5-41b2-9f5b-59dd9019d505"}]: dispatch
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2263674412' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ca0e1490-5ec5-41b2-9f5b-59dd9019d505"}]': finished
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 04:33:20 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 04:33:20 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 04:33:20 np0005591760 lvm[80554]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:33:20 np0005591760 lvm[80554]: VG ceph_vg0 finished
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/781361497' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: stderr: got monmap epoch 1
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: --> Creating keyring file for osd.0
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 22 04:33:20 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 64f30ffd-1e43-4897-997f-ebad3f519f02 --setuser ceph --setgroup ceph
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 22 04:33:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2208973767' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 22 04:33:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:21 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1152048619' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "64f30ffd-1e43-4897-997f-ebad3f519f02"}]: dispatch
Jan 22 04:33:21 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1152048619' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "64f30ffd-1e43-4897-997f-ebad3f519f02"}]': finished
Jan 22 04:33:21 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.101:0/2263674412' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ca0e1490-5ec5-41b2-9f5b-59dd9019d505"}]: dispatch
Jan 22 04:33:21 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.101:0/2263674412' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ca0e1490-5ec5-41b2-9f5b-59dd9019d505"}]': finished
Jan 22 04:33:21 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 22 04:33:22 np0005591760 ceph-mon[74254]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 22 04:33:22 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 2 completed events
Jan 22 04:33:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:33:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:33:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:23 np0005591760 youthful_jennings[80493]: stderr: 2026-01-22T09:33:20.900+0000 7fd6abbb2740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 22 04:33:23 np0005591760 youthful_jennings[80493]: stderr: 2026-01-22T09:33:21.163+0000 7fd6abbb2740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 22 04:33:23 np0005591760 youthful_jennings[80493]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 22 04:33:23 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 22 04:33:23 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 22 04:33:23 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:23 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:23 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 04:33:23 np0005591760 youthful_jennings[80493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 22 04:33:23 np0005591760 youthful_jennings[80493]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 22 04:33:23 np0005591760 youthful_jennings[80493]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 22 04:33:23 np0005591760 systemd[1]: libpod-ba315523dae041c68ebcb90ce670d395aafb8be555582792e25387c7208184c9.scope: Deactivated successfully.
Jan 22 04:33:23 np0005591760 systemd[1]: libpod-ba315523dae041c68ebcb90ce670d395aafb8be555582792e25387c7208184c9.scope: Consumed 1.366s CPU time.
Jan 22 04:33:23 np0005591760 podman[80480]: 2026-01-22 09:33:23.707088551 +0000 UTC m=+3.975320628 container died ba315523dae041c68ebcb90ce670d395aafb8be555582792e25387c7208184c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jennings, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 22 04:33:23 np0005591760 systemd[1]: var-lib-containers-storage-overlay-f296e57ae5e707b4e5329e5807ca2a27605832833bfdafa89ddb4794dcfbbf87-merged.mount: Deactivated successfully.
Jan 22 04:33:23 np0005591760 podman[80480]: 2026-01-22 09:33:23.729305535 +0000 UTC m=+3.997537612 container remove ba315523dae041c68ebcb90ce670d395aafb8be555582792e25387c7208184c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jennings, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 04:33:23 np0005591760 systemd[1]: libpod-conmon-ba315523dae041c68ebcb90ce670d395aafb8be555582792e25387c7208184c9.scope: Deactivated successfully.
Jan 22 04:33:23 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:24 np0005591760 podman[81572]: 2026-01-22 09:33:24.103613775 +0000 UTC m=+0.025891132 container create 4ed10b70e0d948989776332531eeababe2b10750217d08e144a077cbe8e2797d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_mendeleev, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 22 04:33:24 np0005591760 systemd[1]: Started libpod-conmon-4ed10b70e0d948989776332531eeababe2b10750217d08e144a077cbe8e2797d.scope.
Jan 22 04:33:24 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:24 np0005591760 podman[81572]: 2026-01-22 09:33:24.162237025 +0000 UTC m=+0.084514392 container init 4ed10b70e0d948989776332531eeababe2b10750217d08e144a077cbe8e2797d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_mendeleev, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:33:24 np0005591760 podman[81572]: 2026-01-22 09:33:24.167257078 +0000 UTC m=+0.089534434 container start 4ed10b70e0d948989776332531eeababe2b10750217d08e144a077cbe8e2797d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:33:24 np0005591760 podman[81572]: 2026-01-22 09:33:24.168509129 +0000 UTC m=+0.090786486 container attach 4ed10b70e0d948989776332531eeababe2b10750217d08e144a077cbe8e2797d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_mendeleev, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 04:33:24 np0005591760 relaxed_mendeleev[81586]: 167 167
Jan 22 04:33:24 np0005591760 systemd[1]: libpod-4ed10b70e0d948989776332531eeababe2b10750217d08e144a077cbe8e2797d.scope: Deactivated successfully.
Jan 22 04:33:24 np0005591760 podman[81572]: 2026-01-22 09:33:24.171836722 +0000 UTC m=+0.094114089 container died 4ed10b70e0d948989776332531eeababe2b10750217d08e144a077cbe8e2797d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 22 04:33:24 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b9468bc08c27e2572f2a4276802d2ccaaa9799f7738c970231a8b8ca2ebd248c-merged.mount: Deactivated successfully.
Jan 22 04:33:24 np0005591760 podman[81572]: 2026-01-22 09:33:24.092963737 +0000 UTC m=+0.015241094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:33:24 np0005591760 podman[81572]: 2026-01-22 09:33:24.191431503 +0000 UTC m=+0.113708860 container remove 4ed10b70e0d948989776332531eeababe2b10750217d08e144a077cbe8e2797d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:33:24 np0005591760 systemd[1]: libpod-conmon-4ed10b70e0d948989776332531eeababe2b10750217d08e144a077cbe8e2797d.scope: Deactivated successfully.
Jan 22 04:33:24 np0005591760 podman[81607]: 2026-01-22 09:33:24.299386744 +0000 UTC m=+0.027114368 container create 32ee6ff8d1298d04c8910e1927f7fdfbf2f36b3ee3be66bf8060da259ba5fa13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 04:33:24 np0005591760 systemd[1]: Started libpod-conmon-32ee6ff8d1298d04c8910e1927f7fdfbf2f36b3ee3be66bf8060da259ba5fa13.scope.
Jan 22 04:33:24 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287dbe48d8bfc40b125fe06aef5eef3fe85a3d657101893ffbe1be09f9f8934e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287dbe48d8bfc40b125fe06aef5eef3fe85a3d657101893ffbe1be09f9f8934e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287dbe48d8bfc40b125fe06aef5eef3fe85a3d657101893ffbe1be09f9f8934e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287dbe48d8bfc40b125fe06aef5eef3fe85a3d657101893ffbe1be09f9f8934e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:24 np0005591760 podman[81607]: 2026-01-22 09:33:24.350569435 +0000 UTC m=+0.078297069 container init 32ee6ff8d1298d04c8910e1927f7fdfbf2f36b3ee3be66bf8060da259ba5fa13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 04:33:24 np0005591760 podman[81607]: 2026-01-22 09:33:24.356441315 +0000 UTC m=+0.084168939 container start 32ee6ff8d1298d04c8910e1927f7fdfbf2f36b3ee3be66bf8060da259ba5fa13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:33:24 np0005591760 podman[81607]: 2026-01-22 09:33:24.358450433 +0000 UTC m=+0.086178067 container attach 32ee6ff8d1298d04c8910e1927f7fdfbf2f36b3ee3be66bf8060da259ba5fa13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_engelbart, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:33:24 np0005591760 podman[81607]: 2026-01-22 09:33:24.288216615 +0000 UTC m=+0.015944239 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]: {
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:    "0": [
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:        {
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            "devices": [
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "/dev/loop3"
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            ],
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            "lv_name": "ceph_lv0",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            "lv_size": "21470642176",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            "name": "ceph_lv0",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            "tags": {
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.cluster_name": "ceph",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.crush_device_class": "",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.encrypted": "0",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.osd_id": "0",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.type": "block",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.vdo": "0",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:                "ceph.with_tpm": "0"
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            },
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            "type": "block",
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:            "vg_name": "ceph_vg0"
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:        }
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]:    ]
Jan 22 04:33:24 np0005591760 suspicious_engelbart[81620]: }
Jan 22 04:33:24 np0005591760 systemd[1]: libpod-32ee6ff8d1298d04c8910e1927f7fdfbf2f36b3ee3be66bf8060da259ba5fa13.scope: Deactivated successfully.
Jan 22 04:33:24 np0005591760 podman[81607]: 2026-01-22 09:33:24.584542921 +0000 UTC m=+0.312270544 container died 32ee6ff8d1298d04c8910e1927f7fdfbf2f36b3ee3be66bf8060da259ba5fa13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_engelbart, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:33:24 np0005591760 systemd[1]: var-lib-containers-storage-overlay-287dbe48d8bfc40b125fe06aef5eef3fe85a3d657101893ffbe1be09f9f8934e-merged.mount: Deactivated successfully.
Jan 22 04:33:24 np0005591760 podman[81607]: 2026-01-22 09:33:24.605544234 +0000 UTC m=+0.333271857 container remove 32ee6ff8d1298d04c8910e1927f7fdfbf2f36b3ee3be66bf8060da259ba5fa13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_engelbart, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:33:24 np0005591760 systemd[1]: libpod-conmon-32ee6ff8d1298d04c8910e1927f7fdfbf2f36b3ee3be66bf8060da259ba5fa13.scope: Deactivated successfully.
Jan 22 04:33:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 22 04:33:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 22 04:33:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:33:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:33:24 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 22 04:33:24 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 22 04:33:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 22 04:33:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 22 04:33:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:33:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:33:24 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Jan 22 04:33:24 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Jan 22 04:33:24 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 22 04:33:24 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 22 04:33:25 np0005591760 podman[81723]: 2026-01-22 09:33:25.003380482 +0000 UTC m=+0.028283019 container create 84f7dc07d6ae3c6c53e5c47977e555a46180bc75f88b8e8b6d423675f59d91b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_carson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:33:25 np0005591760 systemd[1]: Started libpod-conmon-84f7dc07d6ae3c6c53e5c47977e555a46180bc75f88b8e8b6d423675f59d91b2.scope.
Jan 22 04:33:25 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:25 np0005591760 podman[81723]: 2026-01-22 09:33:25.058012325 +0000 UTC m=+0.082914872 container init 84f7dc07d6ae3c6c53e5c47977e555a46180bc75f88b8e8b6d423675f59d91b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_carson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:33:25 np0005591760 podman[81723]: 2026-01-22 09:33:25.06180279 +0000 UTC m=+0.086705317 container start 84f7dc07d6ae3c6c53e5c47977e555a46180bc75f88b8e8b6d423675f59d91b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:33:25 np0005591760 podman[81723]: 2026-01-22 09:33:25.064298387 +0000 UTC m=+0.089200905 container attach 84f7dc07d6ae3c6c53e5c47977e555a46180bc75f88b8e8b6d423675f59d91b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_carson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:33:25 np0005591760 romantic_carson[81736]: 167 167
Jan 22 04:33:25 np0005591760 podman[81723]: 2026-01-22 09:33:25.066582457 +0000 UTC m=+0.091484984 container died 84f7dc07d6ae3c6c53e5c47977e555a46180bc75f88b8e8b6d423675f59d91b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Jan 22 04:33:25 np0005591760 systemd[1]: libpod-84f7dc07d6ae3c6c53e5c47977e555a46180bc75f88b8e8b6d423675f59d91b2.scope: Deactivated successfully.
Jan 22 04:33:25 np0005591760 systemd[1]: var-lib-containers-storage-overlay-311c35e20242e4669c6d5f95c3f9cd3eeb57f3e2631db58bcd9a895e6aa216e5-merged.mount: Deactivated successfully.
Jan 22 04:33:25 np0005591760 podman[81723]: 2026-01-22 09:33:25.082933284 +0000 UTC m=+0.107835811 container remove 84f7dc07d6ae3c6c53e5c47977e555a46180bc75f88b8e8b6d423675f59d91b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:33:25 np0005591760 podman[81723]: 2026-01-22 09:33:24.991342533 +0000 UTC m=+0.016245080 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:33:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:25 np0005591760 systemd[1]: libpod-conmon-84f7dc07d6ae3c6c53e5c47977e555a46180bc75f88b8e8b6d423675f59d91b2.scope: Deactivated successfully.
Jan 22 04:33:25 np0005591760 podman[81764]: 2026-01-22 09:33:25.255456485 +0000 UTC m=+0.028988917 container create 701e702f1e8a63486cab5e2f87dbb1b2dc64fd4292cd1843c7b218f24b281a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate-test, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 04:33:25 np0005591760 systemd[1]: Started libpod-conmon-701e702f1e8a63486cab5e2f87dbb1b2dc64fd4292cd1843c7b218f24b281a40.scope.
Jan 22 04:33:25 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:25 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fc348dc5db87e038426c8d097033d85f3df2794e6660bcf1376dd2d45880dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:25 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fc348dc5db87e038426c8d097033d85f3df2794e6660bcf1376dd2d45880dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:25 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fc348dc5db87e038426c8d097033d85f3df2794e6660bcf1376dd2d45880dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:25 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fc348dc5db87e038426c8d097033d85f3df2794e6660bcf1376dd2d45880dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:25 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fc348dc5db87e038426c8d097033d85f3df2794e6660bcf1376dd2d45880dc/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:25 np0005591760 podman[81764]: 2026-01-22 09:33:25.314928459 +0000 UTC m=+0.088460892 container init 701e702f1e8a63486cab5e2f87dbb1b2dc64fd4292cd1843c7b218f24b281a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate-test, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:33:25 np0005591760 podman[81764]: 2026-01-22 09:33:25.322418528 +0000 UTC m=+0.095950960 container start 701e702f1e8a63486cab5e2f87dbb1b2dc64fd4292cd1843c7b218f24b281a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 04:33:25 np0005591760 podman[81764]: 2026-01-22 09:33:25.323452634 +0000 UTC m=+0.096985066 container attach 701e702f1e8a63486cab5e2f87dbb1b2dc64fd4292cd1843c7b218f24b281a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 22 04:33:25 np0005591760 podman[81764]: 2026-01-22 09:33:25.243599738 +0000 UTC m=+0.017132169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:33:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate-test[81777]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 22 04:33:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate-test[81777]:                            [--no-systemd] [--no-tmpfs]
Jan 22 04:33:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate-test[81777]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 22 04:33:25 np0005591760 systemd[1]: libpod-701e702f1e8a63486cab5e2f87dbb1b2dc64fd4292cd1843c7b218f24b281a40.scope: Deactivated successfully.
Jan 22 04:33:25 np0005591760 podman[81764]: 2026-01-22 09:33:25.473410471 +0000 UTC m=+0.246942913 container died 701e702f1e8a63486cab5e2f87dbb1b2dc64fd4292cd1843c7b218f24b281a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate-test, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:33:25 np0005591760 systemd[1]: var-lib-containers-storage-overlay-63fc348dc5db87e038426c8d097033d85f3df2794e6660bcf1376dd2d45880dc-merged.mount: Deactivated successfully.
Jan 22 04:33:25 np0005591760 podman[81764]: 2026-01-22 09:33:25.493661079 +0000 UTC m=+0.267193511 container remove 701e702f1e8a63486cab5e2f87dbb1b2dc64fd4292cd1843c7b218f24b281a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 04:33:25 np0005591760 systemd[1]: libpod-conmon-701e702f1e8a63486cab5e2f87dbb1b2dc64fd4292cd1843c7b218f24b281a40.scope: Deactivated successfully.
Jan 22 04:33:25 np0005591760 systemd[1]: Reloading.
Jan 22 04:33:25 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:33:25 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:33:25 np0005591760 ceph-mon[74254]: Deploying daemon osd.0 on compute-0
Jan 22 04:33:25 np0005591760 ceph-mon[74254]: Deploying daemon osd.1 on compute-1
Jan 22 04:33:25 np0005591760 systemd[1]: Reloading.
Jan 22 04:33:25 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:33:25 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:33:26 np0005591760 systemd[1]: Starting Ceph osd.0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:33:26 np0005591760 podman[81925]: 2026-01-22 09:33:26.258076801 +0000 UTC m=+0.027120250 container create 5f1c7255a903699110da92608c1e3a96d4c948b96becfbdbce2b594f3b7c9964 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 22 04:33:26 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e5a9c81ad233217817fcff86505ac587813be687c48dbbdffba576ab086e85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e5a9c81ad233217817fcff86505ac587813be687c48dbbdffba576ab086e85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e5a9c81ad233217817fcff86505ac587813be687c48dbbdffba576ab086e85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e5a9c81ad233217817fcff86505ac587813be687c48dbbdffba576ab086e85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e5a9c81ad233217817fcff86505ac587813be687c48dbbdffba576ab086e85/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:26 np0005591760 podman[81925]: 2026-01-22 09:33:26.305375061 +0000 UTC m=+0.074418529 container init 5f1c7255a903699110da92608c1e3a96d4c948b96becfbdbce2b594f3b7c9964 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:33:26 np0005591760 podman[81925]: 2026-01-22 09:33:26.311514077 +0000 UTC m=+0.080557525 container start 5f1c7255a903699110da92608c1e3a96d4c948b96becfbdbce2b594f3b7c9964 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 22 04:33:26 np0005591760 podman[81925]: 2026-01-22 09:33:26.312716931 +0000 UTC m=+0.081760379 container attach 5f1c7255a903699110da92608c1e3a96d4c948b96becfbdbce2b594f3b7c9964 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:33:26 np0005591760 podman[81925]: 2026-01-22 09:33:26.24676571 +0000 UTC m=+0.015809169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:33:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate[81937]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 04:33:26 np0005591760 bash[81925]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 04:33:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate[81937]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 04:33:26 np0005591760 bash[81925]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 04:33:26 np0005591760 lvm[82019]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:33:26 np0005591760 lvm[82019]: VG ceph_vg0 finished
Jan 22 04:33:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate[81937]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 22 04:33:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate[81937]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 04:33:26 np0005591760 bash[81925]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 22 04:33:26 np0005591760 bash[81925]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 04:33:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate[81937]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 04:33:26 np0005591760 bash[81925]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 04:33:26 np0005591760 lvm[82023]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:33:26 np0005591760 lvm[82023]: VG ceph_vg0 finished
Jan 22 04:33:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate[81937]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 22 04:33:26 np0005591760 bash[81925]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 22 04:33:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate[81937]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 22 04:33:26 np0005591760 bash[81925]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 22 04:33:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate[81937]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:27 np0005591760 bash[81925]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate[81937]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:27 np0005591760 bash[81925]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate[81937]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 04:33:27 np0005591760 bash[81925]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 04:33:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate[81937]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 22 04:33:27 np0005591760 bash[81925]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 22 04:33:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate[81937]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 22 04:33:27 np0005591760 bash[81925]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 22 04:33:27 np0005591760 systemd[1]: libpod-5f1c7255a903699110da92608c1e3a96d4c948b96becfbdbce2b594f3b7c9964.scope: Deactivated successfully.
Jan 22 04:33:27 np0005591760 podman[81925]: 2026-01-22 09:33:27.274201066 +0000 UTC m=+1.043244514 container died 5f1c7255a903699110da92608c1e3a96d4c948b96becfbdbce2b594f3b7c9964 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:33:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-83e5a9c81ad233217817fcff86505ac587813be687c48dbbdffba576ab086e85-merged.mount: Deactivated successfully.
Jan 22 04:33:27 np0005591760 podman[81925]: 2026-01-22 09:33:27.298111804 +0000 UTC m=+1.067155252 container remove 5f1c7255a903699110da92608c1e3a96d4c948b96becfbdbce2b594f3b7c9964 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 04:33:27 np0005591760 podman[82169]: 2026-01-22 09:33:27.439515899 +0000 UTC m=+0.026238249 container create 66bd117e643bc864fedaad81014a2a06d9c7009650972e9a6eceaf3d07c8bf5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 04:33:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6fd9901aee13b851280f9fce01180eab42f2f749c1a47da29df7c65b79ff155/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6fd9901aee13b851280f9fce01180eab42f2f749c1a47da29df7c65b79ff155/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6fd9901aee13b851280f9fce01180eab42f2f749c1a47da29df7c65b79ff155/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6fd9901aee13b851280f9fce01180eab42f2f749c1a47da29df7c65b79ff155/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6fd9901aee13b851280f9fce01180eab42f2f749c1a47da29df7c65b79ff155/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:27 np0005591760 podman[82169]: 2026-01-22 09:33:27.483575793 +0000 UTC m=+0.070298163 container init 66bd117e643bc864fedaad81014a2a06d9c7009650972e9a6eceaf3d07c8bf5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 04:33:27 np0005591760 podman[82169]: 2026-01-22 09:33:27.488283644 +0000 UTC m=+0.075005995 container start 66bd117e643bc864fedaad81014a2a06d9c7009650972e9a6eceaf3d07c8bf5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:33:27 np0005591760 bash[82169]: 66bd117e643bc864fedaad81014a2a06d9c7009650972e9a6eceaf3d07c8bf5b
Jan 22 04:33:27 np0005591760 podman[82169]: 2026-01-22 09:33:27.42885375 +0000 UTC m=+0.015576120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:33:27 np0005591760 systemd[1]: Started Ceph osd.0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:33:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:33:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:33:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: pidfile_write: ignore empty --pid-file
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:33:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:33:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:27 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:27 np0005591760 podman[82281]: 2026-01-22 09:33:27.897541543 +0000 UTC m=+0.031229204 container create 6a3104045a3f586aabf7382b78a6423cc20eb9cfc69548410e29e1a1e548b985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 22 04:33:27 np0005591760 systemd[1]: Started libpod-conmon-6a3104045a3f586aabf7382b78a6423cc20eb9cfc69548410e29e1a1e548b985.scope.
Jan 22 04:33:27 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:27 np0005591760 podman[82281]: 2026-01-22 09:33:27.950113559 +0000 UTC m=+0.083801221 container init 6a3104045a3f586aabf7382b78a6423cc20eb9cfc69548410e29e1a1e548b985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:33:27 np0005591760 podman[82281]: 2026-01-22 09:33:27.954313815 +0000 UTC m=+0.088001477 container start 6a3104045a3f586aabf7382b78a6423cc20eb9cfc69548410e29e1a1e548b985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 22 04:33:27 np0005591760 podman[82281]: 2026-01-22 09:33:27.957298573 +0000 UTC m=+0.090986236 container attach 6a3104045a3f586aabf7382b78a6423cc20eb9cfc69548410e29e1a1e548b985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_dhawan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:33:27 np0005591760 nervous_dhawan[82295]: 167 167
Jan 22 04:33:27 np0005591760 systemd[1]: libpod-6a3104045a3f586aabf7382b78a6423cc20eb9cfc69548410e29e1a1e548b985.scope: Deactivated successfully.
Jan 22 04:33:27 np0005591760 podman[82281]: 2026-01-22 09:33:27.958034389 +0000 UTC m=+0.091722051 container died 6a3104045a3f586aabf7382b78a6423cc20eb9cfc69548410e29e1a1e548b985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:33:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b00328096ded9c6dd1eeb68722520e83b2ef6ef97c3fc211127443bc4be9d0bc-merged.mount: Deactivated successfully.
Jan 22 04:33:27 np0005591760 podman[82281]: 2026-01-22 09:33:27.978517333 +0000 UTC m=+0.112204996 container remove 6a3104045a3f586aabf7382b78a6423cc20eb9cfc69548410e29e1a1e548b985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_dhawan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:33:27 np0005591760 podman[82281]: 2026-01-22 09:33:27.88660725 +0000 UTC m=+0.020294932 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:33:27 np0005591760 systemd[1]: libpod-conmon-6a3104045a3f586aabf7382b78a6423cc20eb9cfc69548410e29e1a1e548b985.scope: Deactivated successfully.
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:28 np0005591760 podman[82317]: 2026-01-22 09:33:28.094834368 +0000 UTC m=+0.029578237 container create d2479036323d5214ab4572dbbc80c2843ef3b3636cb3e36d8a9e5b5949170cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_morse, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 04:33:28 np0005591760 systemd[1]: Started libpod-conmon-d2479036323d5214ab4572dbbc80c2843ef3b3636cb3e36d8a9e5b5949170cea.scope.
Jan 22 04:33:28 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b974c8e0ef58ebf4ce7a085fc7f846124e56b5c64f74f7f8d00d7f8af01f9965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b974c8e0ef58ebf4ce7a085fc7f846124e56b5c64f74f7f8d00d7f8af01f9965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b974c8e0ef58ebf4ce7a085fc7f846124e56b5c64f74f7f8d00d7f8af01f9965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b974c8e0ef58ebf4ce7a085fc7f846124e56b5c64f74f7f8d00d7f8af01f9965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:28 np0005591760 podman[82317]: 2026-01-22 09:33:28.148831858 +0000 UTC m=+0.083575727 container init d2479036323d5214ab4572dbbc80c2843ef3b3636cb3e36d8a9e5b5949170cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_morse, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:33:28 np0005591760 podman[82317]: 2026-01-22 09:33:28.155089847 +0000 UTC m=+0.089833715 container start d2479036323d5214ab4572dbbc80c2843ef3b3636cb3e36d8a9e5b5949170cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:33:28 np0005591760 podman[82317]: 2026-01-22 09:33:28.157510342 +0000 UTC m=+0.092254211 container attach d2479036323d5214ab4572dbbc80c2843ef3b3636cb3e36d8a9e5b5949170cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_morse, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 04:33:28 np0005591760 podman[82317]: 2026-01-22 09:33:28.083345302 +0000 UTC m=+0.018089191 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59bc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:28 np0005591760 lvm[82415]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:33:28 np0005591760 lvm[82415]: VG ceph_vg0 finished
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581da59b800 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:28 np0005591760 reverent_morse[82336]: {}
Jan 22 04:33:28 np0005591760 systemd[1]: libpod-d2479036323d5214ab4572dbbc80c2843ef3b3636cb3e36d8a9e5b5949170cea.scope: Deactivated successfully.
Jan 22 04:33:28 np0005591760 podman[82317]: 2026-01-22 09:33:28.671180035 +0000 UTC m=+0.605923914 container died d2479036323d5214ab4572dbbc80c2843ef3b3636cb3e36d8a9e5b5949170cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_morse, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Jan 22 04:33:28 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b974c8e0ef58ebf4ce7a085fc7f846124e56b5c64f74f7f8d00d7f8af01f9965-merged.mount: Deactivated successfully.
Jan 22 04:33:28 np0005591760 podman[82317]: 2026-01-22 09:33:28.694222468 +0000 UTC m=+0.628966337 container remove d2479036323d5214ab4572dbbc80c2843ef3b3636cb3e36d8a9e5b5949170cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_morse, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:28 np0005591760 systemd[1]: libpod-conmon-d2479036323d5214ab4572dbbc80c2843ef3b3636cb3e36d8a9e5b5949170cea.scope: Deactivated successfully.
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:33:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: load: jerasure load: lrc 
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 04:33:28 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1582813310,v1:192.168.122.101:6801/1582813310]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 22 04:33:29 np0005591760 podman[82576]: 2026-01-22 09:33:29.329290198 +0000 UTC m=+0.037743275 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:33:29 np0005591760 podman[82576]: 2026-01-22 09:33:29.408086697 +0000 UTC m=+0.116539774 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46d000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount shared_bdev_used = 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: RocksDB version: 7.9.2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Git sha 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: DB SUMMARY
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: DB Session ID:  1BIKFEWQNQ6ANXTQCGC3
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: CURRENT file:  CURRENT
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                         Options.error_if_exists: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.create_if_missing: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                                     Options.env: 0x5581db407dc0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                                Options.info_log: 0x5581db40b7a0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                              Options.statistics: (nil)
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.use_fsync: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                              Options.db_log_dir: 
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                                 Options.wal_dir: db.wal
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.write_buffer_manager: 0x5581db536a00
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.unordered_write: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.row_cache: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                              Options.wal_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.two_write_queues: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.wal_compression: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.atomic_flush: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.max_background_jobs: 4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.max_background_compactions: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.max_subcompactions: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.max_open_files: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Compression algorithms supported:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: 	kZSTD supported: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: 	kXpressCompression supported: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: 	kBZip2Compression supported: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: 	kLZ4Compression supported: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: 	kZlibCompression supported: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: 	kSnappyCompression supported: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bb60)
          cache_index_and_filter_blocks: 1
          cache_index_and_filter_blocks_with_high_priority: 0
          pin_l0_filter_and_index_blocks_in_cache: 0
          pin_top_level_index_and_filter: 1
          index_type: 0
          data_block_index_type: 0
          index_shortening: 1
          data_block_hash_table_util_ratio: 0.750000
          checksum: 4
          no_block_cache: 0
          block_cache: 0x5581da631350
          block_cache_name: BinnedLRUCache
          block_cache_options:
            capacity : 483183820
            num_shard_bits : 4
            strict_capacity_limit : 0
            high_pri_pool_ratio: 0.000
          block_cache_compressed: (nil)
          persistent_cache: (nil)
          block_size: 4096
          block_size_deviation: 10
          block_restart_interval: 16
          index_block_restart_interval: 1
          metadata_block_size: 4096
          partition_filters: 0
          use_delta_encoding: 1
          filter_policy: bloomfilter
          whole_key_filtering: 1
          verify_compression: 0
          read_amp_bytes_per_bit: 0
          format_version: 5
          enable_index_compression: 1
          block_align: 0
          max_auto_readahead_size: 262144
          prepopulate_block_cache: 0
          initial_auto_readahead_size: 8192
          num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bb60)
          cache_index_and_filter_blocks: 1
          cache_index_and_filter_blocks_with_high_priority: 0
          pin_l0_filter_and_index_blocks_in_cache: 0
          pin_top_level_index_and_filter: 1
          index_type: 0
          data_block_index_type: 0
          index_shortening: 1
          data_block_hash_table_util_ratio: 0.750000
          checksum: 4
          no_block_cache: 0
          block_cache: 0x5581da631350
          block_cache_name: BinnedLRUCache
          block_cache_options:
            capacity : 483183820
            num_shard_bits : 4
            strict_capacity_limit : 0
            high_pri_pool_ratio: 0.000
          block_cache_compressed: (nil)
          persistent_cache: (nil)
          block_size: 4096
          block_size_deviation: 10
          block_restart_interval: 16
          index_block_restart_interval: 1
          metadata_block_size: 4096
          partition_filters: 0
          use_delta_encoding: 1
          filter_policy: bloomfilter
          whole_key_filtering: 1
          verify_compression: 0
          read_amp_bytes_per_bit: 0
          format_version: 5
          enable_index_compression: 1
          block_align: 0
          max_auto_readahead_size: 262144
          prepopulate_block_cache: 0
          initial_auto_readahead_size: 8192
          num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bb60)
          cache_index_and_filter_blocks: 1
          cache_index_and_filter_blocks_with_high_priority: 0
          pin_l0_filter_and_index_blocks_in_cache: 0
          pin_top_level_index_and_filter: 1
          index_type: 0
          data_block_index_type: 0
          index_shortening: 1
          data_block_hash_table_util_ratio: 0.750000
          checksum: 4
          no_block_cache: 0
          block_cache: 0x5581da631350
          block_cache_name: BinnedLRUCache
          block_cache_options:
            capacity : 483183820
            num_shard_bits : 4
            strict_capacity_limit : 0
            high_pri_pool_ratio: 0.000
          block_cache_compressed: (nil)
          persistent_cache: (nil)
          block_size: 4096
          block_size_deviation: 10
          block_restart_interval: 16
          index_block_restart_interval: 1
          metadata_block_size: 4096
          partition_filters: 0
          use_delta_encoding: 1
          filter_policy: bloomfilter
          whole_key_filtering: 1
          verify_compression: 0
          read_amp_bytes_per_bit: 0
          format_version: 5
          enable_index_compression: 1
          block_align: 0
          max_auto_readahead_size: 262144
          prepopulate_block_cache: 0
          initial_auto_readahead_size: 8192
          num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5581da631350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5581da631350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5581da631350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5581da631350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5581da6309b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5581da6309b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5581da6309b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: eb180ac4-62c9-4b1b-ab86-8c9dbd3d38d6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074409488620, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074409488802, "job": 1, "event": "recovery_finished"}
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: freelist init
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: freelist _read_cfg
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs umount
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46d000 /var/lib/ceph/osd/ceph-0/block) close
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: from='osd.1 [v2:192.168.122.101:6800/1582813310,v1:192.168.122.101:6801/1582813310]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46d000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bdev(0x5581db46d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluefs mount shared_bdev_used = 4718592
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: RocksDB version: 7.9.2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Git sha 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: DB SUMMARY
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: DB Session ID:  1BIKFEWQNQ6ANXTQCGC2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: CURRENT file:  CURRENT
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                         Options.error_if_exists: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.create_if_missing: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                                     Options.env: 0x5581db5da310
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                                Options.info_log: 0x5581db40b940
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                              Options.statistics: (nil)
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.use_fsync: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                              Options.db_log_dir: 
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                                 Options.wal_dir: db.wal
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.write_buffer_manager: 0x5581db536a00
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.unordered_write: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.row_cache: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                              Options.wal_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.two_write_queues: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.wal_compression: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.atomic_flush: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.max_background_jobs: 4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.max_background_compactions: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.max_subcompactions: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.max_open_files: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Compression algorithms supported:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         kZSTD supported: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         kXpressCompression supported: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         kBZip2Compression supported: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         kLZ4Compression supported: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         kZlibCompression supported: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         kLZ4HCCompression supported: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         kSnappyCompression supported: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5581da631350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5581da631350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40b680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5581da631350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40b680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5581da631350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40b680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5581da631350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40b680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5581da631350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40b680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5581da631350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5581da6309b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5581da6309b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:           Options.merge_operator: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581db40bac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5581da6309b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.compression: LZ4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.num_levels: 7
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.bloom_locality: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                               Options.ttl: 2592000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                       Options.enable_blob_files: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                           Options.min_blob_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: eb180ac4-62c9-4b1b-ab86-8c9dbd3d38d6
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074409748318, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074409751466, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074409, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "eb180ac4-62c9-4b1b-ab86-8c9dbd3d38d6", "db_session_id": "1BIKFEWQNQ6ANXTQCGC2", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074409755356, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074409, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "eb180ac4-62c9-4b1b-ab86-8c9dbd3d38d6", "db_session_id": "1BIKFEWQNQ6ANXTQCGC2", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074409756399, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074409, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "eb180ac4-62c9-4b1b-ab86-8c9dbd3d38d6", "db_session_id": "1BIKFEWQNQ6ANXTQCGC2", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074409757068, "job": 1, "event": "recovery_finished"}
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5581db608000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: DB pointer 0x5581db5e8000
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5581da631350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5581da631350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5581da631350#2 capacity: 460.80 MB usage: 0
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: _get_class not permitted to load lua
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: _get_class not permitted to load sdk
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: osd.0 0 load_pgs
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: osd.0 0 load_pgs opened 0 pgs
Jan 22 04:33:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0[82181]: 2026-01-22T09:33:29.770+0000 7ff8d9ca0740 -1 osd.0 0 log_to_monitors true
Jan 22 04:33:29 np0005591760 ceph-osd[82185]: osd.0 0 log_to_monitors true
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 22 04:33:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1679360742,v1:192.168.122.100:6803/1679360742]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 22 04:33:30 np0005591760 podman[83133]: 2026-01-22 09:33:30.011352396 +0000 UTC m=+0.026865489 container create 20caf96db69dc4b343b648ce2701949f1ef1d06bb87af95bd0fe270ccd77bddd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:33:30 np0005591760 systemd[1]: Started libpod-conmon-20caf96db69dc4b343b648ce2701949f1ef1d06bb87af95bd0fe270ccd77bddd.scope.
Jan 22 04:33:30 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:30 np0005591760 podman[83133]: 2026-01-22 09:33:30.066619016 +0000 UTC m=+0.082132119 container init 20caf96db69dc4b343b648ce2701949f1ef1d06bb87af95bd0fe270ccd77bddd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_turing, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:33:30 np0005591760 podman[83133]: 2026-01-22 09:33:30.071315135 +0000 UTC m=+0.086828229 container start 20caf96db69dc4b343b648ce2701949f1ef1d06bb87af95bd0fe270ccd77bddd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:33:30 np0005591760 cranky_turing[83147]: 167 167
Jan 22 04:33:30 np0005591760 systemd[1]: libpod-20caf96db69dc4b343b648ce2701949f1ef1d06bb87af95bd0fe270ccd77bddd.scope: Deactivated successfully.
Jan 22 04:33:30 np0005591760 podman[83133]: 2026-01-22 09:33:30.075799666 +0000 UTC m=+0.091312769 container attach 20caf96db69dc4b343b648ce2701949f1ef1d06bb87af95bd0fe270ccd77bddd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:33:30 np0005591760 podman[83133]: 2026-01-22 09:33:30.076238502 +0000 UTC m=+0.091751615 container died 20caf96db69dc4b343b648ce2701949f1ef1d06bb87af95bd0fe270ccd77bddd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_turing, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:33:30 np0005591760 systemd[1]: var-lib-containers-storage-overlay-30d185e7be2b1cac51ecc74e874a3921c35f9a4cbc3b1655ba3ddbfeaef072cf-merged.mount: Deactivated successfully.
Jan 22 04:33:30 np0005591760 podman[83133]: 2026-01-22 09:33:30.092806488 +0000 UTC m=+0.108319581 container remove 20caf96db69dc4b343b648ce2701949f1ef1d06bb87af95bd0fe270ccd77bddd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_turing, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 22 04:33:30 np0005591760 podman[83133]: 2026-01-22 09:33:30.00087778 +0000 UTC m=+0.016390873 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:33:30 np0005591760 systemd[1]: libpod-conmon-20caf96db69dc4b343b648ce2701949f1ef1d06bb87af95bd0fe270ccd77bddd.scope: Deactivated successfully.
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1582813310,v1:192.168.122.101:6801/1582813310]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1679360742,v1:192.168.122.100:6803/1679360742]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 22 04:33:30 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1679360742,v1:192.168.122.100:6803/1679360742]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1582813310,v1:192.168.122.101:6801/1582813310]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 04:33:30 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 04:33:30 np0005591760 podman[83169]: 2026-01-22 09:33:30.205341343 +0000 UTC m=+0.026069651 container create c5a0242081d20d4a674ebe08096ab2d76cfed27b956381c83918b1ba9fe2fc73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:33:30 np0005591760 systemd[1]: Started libpod-conmon-c5a0242081d20d4a674ebe08096ab2d76cfed27b956381c83918b1ba9fe2fc73.scope.
Jan 22 04:33:30 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabb284c31c3e379996434e6e47b0598e487eebf94dd7145a65c8424608bc440/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabb284c31c3e379996434e6e47b0598e487eebf94dd7145a65c8424608bc440/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabb284c31c3e379996434e6e47b0598e487eebf94dd7145a65c8424608bc440/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabb284c31c3e379996434e6e47b0598e487eebf94dd7145a65c8424608bc440/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:30 np0005591760 podman[83169]: 2026-01-22 09:33:30.270758538 +0000 UTC m=+0.091486856 container init c5a0242081d20d4a674ebe08096ab2d76cfed27b956381c83918b1ba9fe2fc73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_boyd, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:33:30 np0005591760 podman[83169]: 2026-01-22 09:33:30.275137471 +0000 UTC m=+0.095865769 container start c5a0242081d20d4a674ebe08096ab2d76cfed27b956381c83918b1ba9fe2fc73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 04:33:30 np0005591760 podman[83169]: 2026-01-22 09:33:30.277293669 +0000 UTC m=+0.098021967 container attach c5a0242081d20d4a674ebe08096ab2d76cfed27b956381c83918b1ba9fe2fc73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_boyd, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 04:33:30 np0005591760 podman[83169]: 2026-01-22 09:33:30.194900951 +0000 UTC m=+0.015629269 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:33:30 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5248M
Jan 22 04:33:30 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5248M
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: from='osd.0 [v2:192.168.122.100:6802/1679360742,v1:192.168.122.100:6803/1679360742]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: from='osd.1 [v2:192.168.122.101:6800/1582813310,v1:192.168.122.101:6801/1582813310]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: from='osd.0 [v2:192.168.122.100:6802/1679360742,v1:192.168.122.100:6803/1679360742]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: from='osd.0 [v2:192.168.122.100:6802/1679360742,v1:192.168.122.100:6803/1679360742]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: from='osd.1 [v2:192.168.122.101:6800/1582813310,v1:192.168.122.101:6801/1582813310]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 22 04:33:30 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 22 04:33:30 np0005591760 silly_boyd[83183]: [
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:    {
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:        "available": false,
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:        "being_replaced": false,
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:        "ceph_device_lvm": false,
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:        "lsm_data": {},
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:        "lvs": [],
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:        "path": "/dev/sr0",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:        "rejected_reasons": [
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "Insufficient space (<5GB)",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "Has a FileSystem"
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:        ],
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:        "sys_api": {
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "actuators": null,
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "device_nodes": [
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:                "sr0"
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            ],
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "devname": "sr0",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "human_readable_size": "474.00 KB",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "id_bus": "ata",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "model": "QEMU DVD-ROM",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "nr_requests": "64",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "parent": "/dev/sr0",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "partitions": {},
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "path": "/dev/sr0",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "removable": "1",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "rev": "2.5+",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "ro": "0",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "rotational": "1",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "sas_address": "",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "sas_device_handle": "",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "scheduler_mode": "mq-deadline",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "sectors": 0,
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "sectorsize": "2048",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "size": 485376.0,
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "support_discard": "2048",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "type": "disk",
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:            "vendor": "QEMU"
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:        }
Jan 22 04:33:30 np0005591760 silly_boyd[83183]:    }
Jan 22 04:33:30 np0005591760 silly_boyd[83183]: ]
Jan 22 04:33:30 np0005591760 systemd[1]: libpod-c5a0242081d20d4a674ebe08096ab2d76cfed27b956381c83918b1ba9fe2fc73.scope: Deactivated successfully.
Jan 22 04:33:30 np0005591760 podman[84345]: 2026-01-22 09:33:30.819421739 +0000 UTC m=+0.018315368 container died c5a0242081d20d4a674ebe08096ab2d76cfed27b956381c83918b1ba9fe2fc73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_boyd, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:33:30 np0005591760 systemd[1]: var-lib-containers-storage-overlay-aabb284c31c3e379996434e6e47b0598e487eebf94dd7145a65c8424608bc440-merged.mount: Deactivated successfully.
Jan 22 04:33:30 np0005591760 podman[84345]: 2026-01-22 09:33:30.839021021 +0000 UTC m=+0.037914648 container remove c5a0242081d20d4a674ebe08096ab2d76cfed27b956381c83918b1ba9fe2fc73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_boyd, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 22 04:33:30 np0005591760 systemd[1]: libpod-conmon-c5a0242081d20d4a674ebe08096ab2d76cfed27b956381c83918b1ba9fe2fc73.scope: Deactivated successfully.
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:33:30 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.7M
Jan 22 04:33:30 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.7M
Jan 22 04:33:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 22 04:33:30 np0005591760 ceph-mgr[74522]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:33:30 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:33:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1679360742,v1:192.168.122.100:6803/1679360742]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1582813310,v1:192.168.122.101:6801/1582813310]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 22 04:33:31 np0005591760 ceph-osd[82185]: osd.0 0 done with init, starting boot process
Jan 22 04:33:31 np0005591760 ceph-osd[82185]: osd.0 0 start_boot
Jan 22 04:33:31 np0005591760 ceph-osd[82185]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 22 04:33:31 np0005591760 ceph-osd[82185]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 22 04:33:31 np0005591760 ceph-osd[82185]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 22 04:33:31 np0005591760 ceph-osd[82185]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 22 04:33:31 np0005591760 ceph-osd[82185]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 04:33:31 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 04:33:31 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 04:33:31 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1679360742; not ready for session (expect reconnect)
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 04:33:31 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 04:33:31 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1582813310; not ready for session (expect reconnect)
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 04:33:31 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 04:33:31 np0005591760 python3[84383]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:33:31 np0005591760 podman[84385]: 2026-01-22 09:33:31.582471249 +0000 UTC m=+0.028358840 container create bb01b154769acacd1cbfd812d8157bbebe9040d73635a92c5c65b6dfb7e0ebe8 (image=quay.io/ceph/ceph:v19, name=focused_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:33:31 np0005591760 systemd[1]: Started libpod-conmon-bb01b154769acacd1cbfd812d8157bbebe9040d73635a92c5c65b6dfb7e0ebe8.scope.
Jan 22 04:33:31 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:33:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3368d22d50189920e0db05db3425ba31808afe3e7b353f0c78c6946d6c699051/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3368d22d50189920e0db05db3425ba31808afe3e7b353f0c78c6946d6c699051/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3368d22d50189920e0db05db3425ba31808afe3e7b353f0c78c6946d6c699051/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:33:31 np0005591760 podman[84385]: 2026-01-22 09:33:31.649572896 +0000 UTC m=+0.095460496 container init bb01b154769acacd1cbfd812d8157bbebe9040d73635a92c5c65b6dfb7e0ebe8 (image=quay.io/ceph/ceph:v19, name=focused_jepsen, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 04:33:31 np0005591760 podman[84385]: 2026-01-22 09:33:31.653819439 +0000 UTC m=+0.099707020 container start bb01b154769acacd1cbfd812d8157bbebe9040d73635a92c5c65b6dfb7e0ebe8 (image=quay.io/ceph/ceph:v19, name=focused_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 04:33:31 np0005591760 podman[84385]: 2026-01-22 09:33:31.65473395 +0000 UTC m=+0.100621531 container attach bb01b154769acacd1cbfd812d8157bbebe9040d73635a92c5c65b6dfb7e0ebe8 (image=quay.io/ceph/ceph:v19, name=focused_jepsen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:33:31 np0005591760 podman[84385]: 2026-01-22 09:33:31.570816763 +0000 UTC m=+0.016704364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: Adjusting osd_memory_target on compute-1 to  5248M
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: Adjusting osd_memory_target on compute-0 to 128.7M
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: Unable to set osd_memory_target on compute-0 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: from='osd.0 [v2:192.168.122.100:6802/1679360742,v1:192.168.122.100:6803/1679360742]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: from='osd.1 [v2:192.168.122.101:6800/1582813310,v1:192.168.122.101:6801/1582813310]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 22 04:33:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979246552' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 04:33:31 np0005591760 focused_jepsen[84399]: 
Jan 22 04:33:31 np0005591760 focused_jepsen[84399]: {"fsid":"43df7a30-cf5f-5209-adfd-bf44298b19f2","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":73,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":7,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1769074400,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-22T09:32:16:777810+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-22T09:32:16.778638+0000","services":{}},"progress_events":{}}
Jan 22 04:33:31 np0005591760 systemd[1]: libpod-bb01b154769acacd1cbfd812d8157bbebe9040d73635a92c5c65b6dfb7e0ebe8.scope: Deactivated successfully.
Jan 22 04:33:31 np0005591760 conmon[84399]: conmon bb01b154769acacd1cbf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb01b154769acacd1cbfd812d8157bbebe9040d73635a92c5c65b6dfb7e0ebe8.scope/container/memory.events
Jan 22 04:33:31 np0005591760 podman[84385]: 2026-01-22 09:33:31.98931906 +0000 UTC m=+0.435206641 container died bb01b154769acacd1cbfd812d8157bbebe9040d73635a92c5c65b6dfb7e0ebe8 (image=quay.io/ceph/ceph:v19, name=focused_jepsen, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 22 04:33:32 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3368d22d50189920e0db05db3425ba31808afe3e7b353f0c78c6946d6c699051-merged.mount: Deactivated successfully.
Jan 22 04:33:32 np0005591760 podman[84385]: 2026-01-22 09:33:32.014471365 +0000 UTC m=+0.460358947 container remove bb01b154769acacd1cbfd812d8157bbebe9040d73635a92c5c65b6dfb7e0ebe8 (image=quay.io/ceph/ceph:v19, name=focused_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Jan 22 04:33:32 np0005591760 systemd[1]: libpod-conmon-bb01b154769acacd1cbfd812d8157bbebe9040d73635a92c5c65b6dfb7e0ebe8.scope: Deactivated successfully.
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1679360742; not ready for session (expect reconnect)
Jan 22 04:33:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 22 04:33:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1582813310; not ready for session (expect reconnect)
Jan 22 04:33:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 22 04:33:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:33:32
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [balancer INFO root] No pools available
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:33:32 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:33:32 np0005591760 ceph-osd[82185]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 73.526 iops: 18822.535 elapsed_sec: 0.159
Jan 22 04:33:32 np0005591760 ceph-osd[82185]: log_channel(cluster) log [WRN] : OSD bench result of 18822.535110 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 04:33:32 np0005591760 ceph-osd[82185]: osd.0 0 waiting for initial osdmap
Jan 22 04:33:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0[82181]: 2026-01-22T09:33:32.941+0000 7ff8d5c23640 -1 osd.0 0 waiting for initial osdmap
Jan 22 04:33:32 np0005591760 ceph-osd[82185]: osd.0 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 22 04:33:32 np0005591760 ceph-osd[82185]: osd.0 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 22 04:33:32 np0005591760 ceph-osd[82185]: osd.0 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 22 04:33:32 np0005591760 ceph-osd[82185]: osd.0 7 check_osdmap_features require_osd_release unknown -> squid
Jan 22 04:33:32 np0005591760 ceph-osd[82185]: osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 22 04:33:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-osd-0[82181]: 2026-01-22T09:33:32.956+0000 7ff8d124b640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 22 04:33:32 np0005591760 ceph-osd[82185]: osd.0 7 set_numa_affinity not setting numa affinity
Jan 22 04:33:32 np0005591760 ceph-osd[82185]: osd.0 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:33:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 22 04:33:33 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1679360742; not ready for session (expect reconnect)
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 04:33:33 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 22 04:33:33 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1582813310; not ready for session (expect reconnect)
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 04:33:33 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: OSD bench result of 21198.326412 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: OSD bench result of 18822.535110 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e8 e8: 2 total, 2 up, 2 in
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/1582813310,v1:192.168.122.101:6801/1582813310] boot
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1679360742,v1:192.168.122.100:6803/1679360742] boot
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 2 up, 2 in
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 22 04:33:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 04:33:33 np0005591760 ceph-osd[82185]: osd.0 8 state: booting -> active
Jan 22 04:33:34 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] creating mgr pool
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: osd.1 [v2:192.168.122.101:6800/1582813310,v1:192.168.122.101:6801/1582813310] boot
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: osd.0 [v2:192.168.122.100:6802/1679360742,v1:192.168.122.100:6803/1679360742] boot
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e9 crush map has features 3314933000852226048, adjusting msgr requires
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 22 04:33:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 22 04:33:34 np0005591760 ceph-osd[82185]: osd.0 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 22 04:33:34 np0005591760 ceph-osd[82185]: osd.0 9 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 22 04:33:34 np0005591760 ceph-osd[82185]: osd.0 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 22 04:33:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 22 04:33:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 22 04:33:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 04:33:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 22 04:33:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Jan 22 04:33:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Jan 22 04:33:35 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 22 04:33:35 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 22 04:33:35 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] creating main.db for devicehealth
Jan 22 04:33:35 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] Check health
Jan 22 04:33:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 22 04:33:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 22 04:33:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 22 04:33:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 04:33:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 22 04:33:36 np0005591760 ceph-mon[74254]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 04:33:36 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 22 04:33:36 np0005591760 ceph-mon[74254]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 22 04:33:36 np0005591760 ceph-mon[74254]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 22 04:33:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Jan 22 04:33:36 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Jan 22 04:33:36 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.rfmoog(active, since 64s)
Jan 22 04:33:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 22 04:33:37 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 04:33:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:33:38 np0005591760 ceph-mon[74254]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 04:33:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 22 04:33:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:33:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:33:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:33:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:33:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 22 04:33:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:33:48 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:33:48 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:33:48 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:33:48 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:33:49 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:33:49 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:33:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:33:49 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:33:49 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:33:49 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 4d0af610-82eb-4ec5-861b-ee9f5d59b801 (Updating mon deployment (+2 -> 3))
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:33:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:33:49 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 22 04:33:49 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 22 04:33:50 np0005591760 ceph-mon[74254]: Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:33:50 np0005591760 ceph-mon[74254]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:33:50 np0005591760 ceph-mon[74254]: Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:33:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:50 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 04:33:50 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 22 04:33:50 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: Deploying daemon mon.compute-2 on compute-2
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: Cluster is now healthy
Jan 22 04:33:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 22 04:33:51 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2868965996; not ready for session (expect reconnect)
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 22 04:33:51 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 22 04:33:51 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 04:33:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 04:33:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:33:52 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2868965996; not ready for session (expect reconnect)
Jan 22 04:33:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 22 04:33:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 04:33:52 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 04:33:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:33:53 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2868965996; not ready for session (expect reconnect)
Jan 22 04:33:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 22 04:33:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 04:33:53 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 04:33:54 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2868965996; not ready for session (expect reconnect)
Jan 22 04:33:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 22 04:33:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 04:33:54 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 04:33:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:33:55 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2868965996; not ready for session (expect reconnect)
Jan 22 04:33:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 22 04:33:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 04:33:55 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 04:33:56 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2868965996; not ready for session (expect reconnect)
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 04:33:56 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : monmap epoch 2
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : last_changed 2026-01-22T09:33:51.795675+0000
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : created 2026-01-22T09:32:15.320230+0000
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsmap 
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.rfmoog(active, since 84s)
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:33:56 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 22 04:33:56 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: mon.compute-0 calling monitor election
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: mon.compute-2 calling monitor election
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: overall HEALTH_OK
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:56 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 04:33:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:33:57 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2868965996; not ready for session (expect reconnect)
Jan 22 04:33:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 22 04:33:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 04:33:57 np0005591760 ceph-mon[74254]: Deploying daemon mon.compute-1 on compute-1
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:58 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 4d0af610-82eb-4ec5-861b-ee9f5d59b801 (Updating mon deployment (+2 -> 3))
Jan 22 04:33:58 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 4d0af610-82eb-4ec5-861b-ee9f5d59b801 (Updating mon deployment (+2 -> 3)) in 8 seconds
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:33:58 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev f4fea7b4-d022-439f-bbab-0bb27c9e2c69 (Updating mgr deployment (+2 -> 3))
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.bisona", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.bisona", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.bisona", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:33:58 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.bisona on compute-2
Jan 22 04:33:58 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.bisona on compute-2
Jan 22 04:33:58 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1757784221; not ready for session (expect reconnect)
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 04:33:58 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 22 04:33:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 04:33:58 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 22 04:33:58 np0005591760 ceph-mgr[74522]: mgr.server handle_report got status from non-daemon mon.compute-2
Jan 22 04:33:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:33:58.797+0000 7f7eca5e3640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Jan 22 04:33:59 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1757784221; not ready for session (expect reconnect)
Jan 22 04:33:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 22 04:33:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 04:33:59 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 22 04:33:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 22 04:33:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:33:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 22 04:33:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:33:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 22 04:34:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 22 04:34:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 04:34:00 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1757784221; not ready for session (expect reconnect)
Jan 22 04:34:00 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 22 04:34:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 22 04:34:01 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1757784221; not ready for session (expect reconnect)
Jan 22 04:34:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 22 04:34:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 04:34:01 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 22 04:34:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:34:02 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1757784221; not ready for session (expect reconnect)
Jan 22 04:34:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 22 04:34:02 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 22 04:34:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 04:34:02 np0005591760 python3[84476]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:02 np0005591760 podman[84478]: 2026-01-22 09:34:02.255978593 +0000 UTC m=+0.026811698 container create d3a640fa8add8d761ff62fbc7a744fc4f14b1aef3ec1832866bba2618c7da8f4 (image=quay.io/ceph/ceph:v19, name=reverent_chatterjee, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 04:34:02 np0005591760 systemd[1]: Started libpod-conmon-d3a640fa8add8d761ff62fbc7a744fc4f14b1aef3ec1832866bba2618c7da8f4.scope.
Jan 22 04:34:02 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:02 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eacb1262914b27c91ffdd3993fe327fa45d672a63c4b458d3795930faf930f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:02 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eacb1262914b27c91ffdd3993fe327fa45d672a63c4b458d3795930faf930f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:02 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eacb1262914b27c91ffdd3993fe327fa45d672a63c4b458d3795930faf930f9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:02 np0005591760 podman[84478]: 2026-01-22 09:34:02.310977278 +0000 UTC m=+0.081810393 container init d3a640fa8add8d761ff62fbc7a744fc4f14b1aef3ec1832866bba2618c7da8f4 (image=quay.io/ceph/ceph:v19, name=reverent_chatterjee, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 04:34:02 np0005591760 podman[84478]: 2026-01-22 09:34:02.315879796 +0000 UTC m=+0.086712911 container start d3a640fa8add8d761ff62fbc7a744fc4f14b1aef3ec1832866bba2618c7da8f4 (image=quay.io/ceph/ceph:v19, name=reverent_chatterjee, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:02 np0005591760 podman[84478]: 2026-01-22 09:34:02.317116525 +0000 UTC m=+0.087949629 container attach d3a640fa8add8d761ff62fbc7a744fc4f14b1aef3ec1832866bba2618c7da8f4 (image=quay.io/ceph/ceph:v19, name=reverent_chatterjee, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 22 04:34:02 np0005591760 podman[84478]: 2026-01-22 09:34:02.244483516 +0000 UTC m=+0.015316631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 22 04:34:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 22 04:34:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 22 04:34:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 22 04:34:02 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:34:02 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:34:02 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 3 completed events
Jan 22 04:34:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:34:02 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:34:02 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:34:02 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:34:02 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:34:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 22 04:34:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 22 04:34:03 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1757784221; not ready for session (expect reconnect)
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 04:34:03 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : monmap epoch 3
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : last_changed 2026-01-22T09:33:58.139199+0000
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : created 2026-01-22T09:32:15.320230+0000
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsmap 
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.rfmoog(active, since 90s)
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.upcmhd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.upcmhd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.upcmhd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:03 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.upcmhd on compute-1
Jan 22 04:34:03 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.upcmhd on compute-1
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: Deploying daemon mgr.compute-2.bisona on compute-2
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: mon.compute-0 calling monitor election
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: mon.compute-2 calling monitor election
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: mon.compute-1 calling monitor election
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: overall HEALTH_OK
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.upcmhd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 04:34:03 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.upcmhd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 04:34:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224318658' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 04:34:04 np0005591760 reverent_chatterjee[84492]: 
Jan 22 04:34:04 np0005591760 reverent_chatterjee[84492]: {"fsid":"43df7a30-cf5f-5209-adfd-bf44298b19f2","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":0,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":11,"num_osds":2,"num_up_osds":2,"osd_up_since":1769074413,"num_in_osds":2,"osd_in_since":1769074400,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":55775232,"bytes_avail":42885509120,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2026-01-22T09:32:16:777810+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-22T09:33:35.101892+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"f4fea7b4-d022-439f-bbab-0bb27c9e2c69":{"message":"Updating mgr deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 22 04:34:04 np0005591760 systemd[1]: libpod-d3a640fa8add8d761ff62fbc7a744fc4f14b1aef3ec1832866bba2618c7da8f4.scope: Deactivated successfully.
Jan 22 04:34:04 np0005591760 conmon[84492]: conmon d3a640fa8add8d761ff6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3a640fa8add8d761ff62fbc7a744fc4f14b1aef3ec1832866bba2618c7da8f4.scope/container/memory.events
Jan 22 04:34:04 np0005591760 podman[84478]: 2026-01-22 09:34:04.051899212 +0000 UTC m=+1.822732327 container died d3a640fa8add8d761ff62fbc7a744fc4f14b1aef3ec1832866bba2618c7da8f4 (image=quay.io/ceph/ceph:v19, name=reverent_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:04 np0005591760 systemd[1]: var-lib-containers-storage-overlay-7eacb1262914b27c91ffdd3993fe327fa45d672a63c4b458d3795930faf930f9-merged.mount: Deactivated successfully.
Jan 22 04:34:04 np0005591760 podman[84478]: 2026-01-22 09:34:04.076139319 +0000 UTC m=+1.846972424 container remove d3a640fa8add8d761ff62fbc7a744fc4f14b1aef3ec1832866bba2618c7da8f4 (image=quay.io/ceph/ceph:v19, name=reverent_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:04 np0005591760 systemd[1]: libpod-conmon-d3a640fa8add8d761ff62fbc7a744fc4f14b1aef3ec1832866bba2618c7da8f4.scope: Deactivated successfully.
Jan 22 04:34:04 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1757784221; not ready for session (expect reconnect)
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: Deploying daemon mgr.compute-1.upcmhd on compute-1
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:04 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev f4fea7b4-d022-439f-bbab-0bb27c9e2c69 (Updating mgr deployment (+2 -> 3))
Jan 22 04:34:04 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event f4fea7b4-d022-439f-bbab-0bb27c9e2c69 (Updating mgr deployment (+2 -> 3)) in 6 seconds
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:04 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 21480eca-ab2e-4166-8985-3a35d1ec544c (Updating crash deployment (+1 -> 3))
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:04 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 22 04:34:04 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 22 04:34:04 np0005591760 python3[84552]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:04 np0005591760 podman[84553]: 2026-01-22 09:34:04.476331732 +0000 UTC m=+0.033945257 container create c8a40608afe0e85c32c5214cddbe4ed4c6a3cb4011c32f45edfb73700316b162 (image=quay.io/ceph/ceph:v19, name=awesome_rosalind, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:04 np0005591760 systemd[1]: Started libpod-conmon-c8a40608afe0e85c32c5214cddbe4ed4c6a3cb4011c32f45edfb73700316b162.scope.
Jan 22 04:34:04 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4b93c176f3e108d6b7ec5aa8892c3f85e7a68a6e2aa008ae5db2c2ed22226d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4b93c176f3e108d6b7ec5aa8892c3f85e7a68a6e2aa008ae5db2c2ed22226d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:04 np0005591760 podman[84553]: 2026-01-22 09:34:04.526267066 +0000 UTC m=+0.083880591 container init c8a40608afe0e85c32c5214cddbe4ed4c6a3cb4011c32f45edfb73700316b162 (image=quay.io/ceph/ceph:v19, name=awesome_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:04 np0005591760 podman[84553]: 2026-01-22 09:34:04.530421095 +0000 UTC m=+0.088034619 container start c8a40608afe0e85c32c5214cddbe4ed4c6a3cb4011c32f45edfb73700316b162 (image=quay.io/ceph/ceph:v19, name=awesome_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:34:04 np0005591760 podman[84553]: 2026-01-22 09:34:04.531402483 +0000 UTC m=+0.089016007 container attach c8a40608afe0e85c32c5214cddbe4ed4c6a3cb4011c32f45edfb73700316b162 (image=quay.io/ceph/ceph:v19, name=awesome_rosalind, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:34:04 np0005591760 podman[84553]: 2026-01-22 09:34:04.462713769 +0000 UTC m=+0.020327303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 22 04:34:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1487156669' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 04:34:05 np0005591760 ceph-mgr[74522]: mgr.server handle_report got status from non-daemon mon.compute-1
Jan 22 04:34:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:05.140+0000 7f7eca5e3640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: Deploying daemon crash.compute-2 on compute-2
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1487156669' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1487156669' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Jan 22 04:34:05 np0005591760 awesome_rosalind[84565]: pool 'vms' created
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 22 04:34:05 np0005591760 systemd[1]: libpod-c8a40608afe0e85c32c5214cddbe4ed4c6a3cb4011c32f45edfb73700316b162.scope: Deactivated successfully.
Jan 22 04:34:05 np0005591760 podman[84553]: 2026-01-22 09:34:05.274182692 +0000 UTC m=+0.831796216 container died c8a40608afe0e85c32c5214cddbe4ed4c6a3cb4011c32f45edfb73700316b162 (image=quay.io/ceph/ceph:v19, name=awesome_rosalind, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:34:05 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ee4b93c176f3e108d6b7ec5aa8892c3f85e7a68a6e2aa008ae5db2c2ed22226d-merged.mount: Deactivated successfully.
Jan 22 04:34:05 np0005591760 podman[84553]: 2026-01-22 09:34:05.293206921 +0000 UTC m=+0.850820445 container remove c8a40608afe0e85c32c5214cddbe4ed4c6a3cb4011c32f45edfb73700316b162 (image=quay.io/ceph/ceph:v19, name=awesome_rosalind, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 22 04:34:05 np0005591760 systemd[1]: libpod-conmon-c8a40608afe0e85c32c5214cddbe4ed4c6a3cb4011c32f45edfb73700316b162.scope: Deactivated successfully.
Jan 22 04:34:05 np0005591760 python3[84627]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:05 np0005591760 podman[84628]: 2026-01-22 09:34:05.556140552 +0000 UTC m=+0.034310264 container create 000c11e02f1ff5f979b74da22e3ab0d72b9c40dc59ec4b2051b6527483dc1a2f (image=quay.io/ceph/ceph:v19, name=pensive_panini, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:05 np0005591760 systemd[1]: Started libpod-conmon-000c11e02f1ff5f979b74da22e3ab0d72b9c40dc59ec4b2051b6527483dc1a2f.scope.
Jan 22 04:34:05 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc6f30ce48a873b17ad53faf7cd44b4c645954ec95e6020a2f0ac2d5b85478b7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc6f30ce48a873b17ad53faf7cd44b4c645954ec95e6020a2f0ac2d5b85478b7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:05 np0005591760 podman[84628]: 2026-01-22 09:34:05.611440262 +0000 UTC m=+0.089609984 container init 000c11e02f1ff5f979b74da22e3ab0d72b9c40dc59ec4b2051b6527483dc1a2f (image=quay.io/ceph/ceph:v19, name=pensive_panini, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:05 np0005591760 podman[84628]: 2026-01-22 09:34:05.617020916 +0000 UTC m=+0.095190619 container start 000c11e02f1ff5f979b74da22e3ab0d72b9c40dc59ec4b2051b6527483dc1a2f (image=quay.io/ceph/ceph:v19, name=pensive_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 04:34:05 np0005591760 podman[84628]: 2026-01-22 09:34:05.619435662 +0000 UTC m=+0.097605384 container attach 000c11e02f1ff5f979b74da22e3ab0d72b9c40dc59ec4b2051b6527483dc1a2f (image=quay.io/ceph/ceph:v19, name=pensive_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:05 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 21480eca-ab2e-4166-8985-3a35d1ec544c (Updating crash deployment (+1 -> 3))
Jan 22 04:34:05 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 21480eca-ab2e-4166-8985-3a35d1ec544c (Updating crash deployment (+1 -> 3)) in 1 seconds
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:34:05 np0005591760 podman[84628]: 2026-01-22 09:34:05.541554156 +0000 UTC m=+0.019723877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v51: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 22 04:34:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1829889370' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 04:34:06 np0005591760 podman[84747]: 2026-01-22 09:34:06.009294926 +0000 UTC m=+0.026382390 container create 02526fdd28afb277b2e88c085bb121f1edd27c1a274ce187154350e59ff3ade1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:06 np0005591760 systemd[1]: Started libpod-conmon-02526fdd28afb277b2e88c085bb121f1edd27c1a274ce187154350e59ff3ade1.scope.
Jan 22 04:34:06 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:06 np0005591760 podman[84747]: 2026-01-22 09:34:06.06063324 +0000 UTC m=+0.077720735 container init 02526fdd28afb277b2e88c085bb121f1edd27c1a274ce187154350e59ff3ade1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:34:06 np0005591760 podman[84747]: 2026-01-22 09:34:06.065875558 +0000 UTC m=+0.082963033 container start 02526fdd28afb277b2e88c085bb121f1edd27c1a274ce187154350e59ff3ade1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 04:34:06 np0005591760 podman[84747]: 2026-01-22 09:34:06.066939681 +0000 UTC m=+0.084027156 container attach 02526fdd28afb277b2e88c085bb121f1edd27c1a274ce187154350e59ff3ade1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 22 04:34:06 np0005591760 exciting_payne[84761]: 167 167
Jan 22 04:34:06 np0005591760 systemd[1]: libpod-02526fdd28afb277b2e88c085bb121f1edd27c1a274ce187154350e59ff3ade1.scope: Deactivated successfully.
Jan 22 04:34:06 np0005591760 podman[84747]: 2026-01-22 09:34:06.069283884 +0000 UTC m=+0.086371358 container died 02526fdd28afb277b2e88c085bb121f1edd27c1a274ce187154350e59ff3ade1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 04:34:06 np0005591760 systemd[1]: var-lib-containers-storage-overlay-190aa075bceaa3c46a3961a877bce52662aebcd94e030ab75689e83fe0de2962-merged.mount: Deactivated successfully.
Jan 22 04:34:06 np0005591760 podman[84747]: 2026-01-22 09:34:06.086627449 +0000 UTC m=+0.103714924 container remove 02526fdd28afb277b2e88c085bb121f1edd27c1a274ce187154350e59ff3ade1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:06 np0005591760 podman[84747]: 2026-01-22 09:34:05.998452337 +0000 UTC m=+0.015539832 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:06 np0005591760 systemd[1]: libpod-conmon-02526fdd28afb277b2e88c085bb121f1edd27c1a274ce187154350e59ff3ade1.scope: Deactivated successfully.
Jan 22 04:34:06 np0005591760 podman[84783]: 2026-01-22 09:34:06.196371822 +0000 UTC m=+0.028347539 container create 892b87946a8b06da9cdec095c64110505d521403e302be9f404a15c6a7b472e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_fermi, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:06 np0005591760 systemd[1]: Started libpod-conmon-892b87946a8b06da9cdec095c64110505d521403e302be9f404a15c6a7b472e9.scope.
Jan 22 04:34:06 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d8d1283d3a865cd128eac83e9d902176ccab215a6a28af081b9aa5a53eec39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d8d1283d3a865cd128eac83e9d902176ccab215a6a28af081b9aa5a53eec39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d8d1283d3a865cd128eac83e9d902176ccab215a6a28af081b9aa5a53eec39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d8d1283d3a865cd128eac83e9d902176ccab215a6a28af081b9aa5a53eec39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d8d1283d3a865cd128eac83e9d902176ccab215a6a28af081b9aa5a53eec39/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:06 np0005591760 podman[84783]: 2026-01-22 09:34:06.258977935 +0000 UTC m=+0.090953653 container init 892b87946a8b06da9cdec095c64110505d521403e302be9f404a15c6a7b472e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_fermi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 22 04:34:06 np0005591760 podman[84783]: 2026-01-22 09:34:06.264590259 +0000 UTC m=+0.096565977 container start 892b87946a8b06da9cdec095c64110505d521403e302be9f404a15c6a7b472e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:06 np0005591760 podman[84783]: 2026-01-22 09:34:06.265769669 +0000 UTC m=+0.097745407 container attach 892b87946a8b06da9cdec095c64110505d521403e302be9f404a15c6a7b472e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1487156669' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1829889370' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 04:34:06 np0005591760 podman[84783]: 2026-01-22 09:34:06.18431679 +0000 UTC m=+0.016292528 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1829889370' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 22 04:34:06 np0005591760 pensive_panini[84640]: pool 'volumes' created
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 22 04:34:06 np0005591760 systemd[1]: libpod-000c11e02f1ff5f979b74da22e3ab0d72b9c40dc59ec4b2051b6527483dc1a2f.scope: Deactivated successfully.
Jan 22 04:34:06 np0005591760 podman[84628]: 2026-01-22 09:34:06.297072881 +0000 UTC m=+0.775242583 container died 000c11e02f1ff5f979b74da22e3ab0d72b9c40dc59ec4b2051b6527483dc1a2f (image=quay.io/ceph/ceph:v19, name=pensive_panini, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 04:34:06 np0005591760 systemd[1]: var-lib-containers-storage-overlay-cc6f30ce48a873b17ad53faf7cd44b4c645954ec95e6020a2f0ac2d5b85478b7-merged.mount: Deactivated successfully.
Jan 22 04:34:06 np0005591760 podman[84628]: 2026-01-22 09:34:06.314350233 +0000 UTC m=+0.792519935 container remove 000c11e02f1ff5f979b74da22e3ab0d72b9c40dc59ec4b2051b6527483dc1a2f (image=quay.io/ceph/ceph:v19, name=pensive_panini, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:34:06 np0005591760 systemd[1]: libpod-conmon-000c11e02f1ff5f979b74da22e3ab0d72b9c40dc59ec4b2051b6527483dc1a2f.scope: Deactivated successfully.
Jan 22 04:34:06 np0005591760 nice_fermi[84797]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:34:06 np0005591760 nice_fermi[84797]: --> All data devices are unavailable
Jan 22 04:34:06 np0005591760 python3[84838]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:06 np0005591760 systemd[1]: libpod-892b87946a8b06da9cdec095c64110505d521403e302be9f404a15c6a7b472e9.scope: Deactivated successfully.
Jan 22 04:34:06 np0005591760 podman[84783]: 2026-01-22 09:34:06.540653373 +0000 UTC m=+0.372629092 container died 892b87946a8b06da9cdec095c64110505d521403e302be9f404a15c6a7b472e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:06 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c1d8d1283d3a865cd128eac83e9d902176ccab215a6a28af081b9aa5a53eec39-merged.mount: Deactivated successfully.
Jan 22 04:34:06 np0005591760 podman[84783]: 2026-01-22 09:34:06.567573765 +0000 UTC m=+0.399549484 container remove 892b87946a8b06da9cdec095c64110505d521403e302be9f404a15c6a7b472e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 04:34:06 np0005591760 systemd[1]: libpod-conmon-892b87946a8b06da9cdec095c64110505d521403e302be9f404a15c6a7b472e9.scope: Deactivated successfully.
Jan 22 04:34:06 np0005591760 podman[84848]: 2026-01-22 09:34:06.581809161 +0000 UTC m=+0.035882282 container create cf3f5736f4dba5e4f733ad12710c4577c1a8a2028222f856e0e250169a80584e (image=quay.io/ceph/ceph:v19, name=vibrant_williamson, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:34:06 np0005591760 systemd[1]: Started libpod-conmon-cf3f5736f4dba5e4f733ad12710c4577c1a8a2028222f856e0e250169a80584e.scope.
Jan 22 04:34:06 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775a0edba8c96ba9b79d035bd595b1c6cad9657c7d8b23f426e93f9cccb76971/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775a0edba8c96ba9b79d035bd595b1c6cad9657c7d8b23f426e93f9cccb76971/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 04:34:06 np0005591760 podman[84848]: 2026-01-22 09:34:06.633926943 +0000 UTC m=+0.088000064 container init cf3f5736f4dba5e4f733ad12710c4577c1a8a2028222f856e0e250169a80584e (image=quay.io/ceph/ceph:v19, name=vibrant_williamson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:06 np0005591760 podman[84848]: 2026-01-22 09:34:06.640849693 +0000 UTC m=+0.094922804 container start cf3f5736f4dba5e4f733ad12710c4577c1a8a2028222f856e0e250169a80584e (image=quay.io/ceph/ceph:v19, name=vibrant_williamson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:06 np0005591760 podman[84848]: 2026-01-22 09:34:06.642535747 +0000 UTC m=+0.096608859 container attach cf3f5736f4dba5e4f733ad12710c4577c1a8a2028222f856e0e250169a80584e (image=quay.io/ceph/ceph:v19, name=vibrant_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 22 04:34:06 np0005591760 podman[84848]: 2026-01-22 09:34:06.568581943 +0000 UTC m=+0.022655075 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "e7fde3af-8dcc-4261-b14b-26da738aa0fb"} v 0)
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1162022378' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7fde3af-8dcc-4261-b14b-26da738aa0fb"}]: dispatch
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1162022378' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e7fde3af-8dcc-4261-b14b-26da738aa0fb"}]': finished
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:06 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 13 pg[3.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 22 04:34:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3569405702' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 04:34:06 np0005591760 podman[84978]: 2026-01-22 09:34:06.979813035 +0000 UTC m=+0.026755042 container create 6dc05a32fc11d6804ed32d398de1b11a7ca69267b7f1f934196ee2460309f43d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:07 np0005591760 systemd[1]: Started libpod-conmon-6dc05a32fc11d6804ed32d398de1b11a7ca69267b7f1f934196ee2460309f43d.scope.
Jan 22 04:34:07 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:07 np0005591760 podman[84978]: 2026-01-22 09:34:07.037562888 +0000 UTC m=+0.084504904 container init 6dc05a32fc11d6804ed32d398de1b11a7ca69267b7f1f934196ee2460309f43d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_lehmann, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:07 np0005591760 podman[84978]: 2026-01-22 09:34:07.042231596 +0000 UTC m=+0.089173603 container start 6dc05a32fc11d6804ed32d398de1b11a7ca69267b7f1f934196ee2460309f43d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_lehmann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 04:34:07 np0005591760 podman[84978]: 2026-01-22 09:34:07.043433979 +0000 UTC m=+0.090376005 container attach 6dc05a32fc11d6804ed32d398de1b11a7ca69267b7f1f934196ee2460309f43d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 04:34:07 np0005591760 tender_lehmann[84991]: 167 167
Jan 22 04:34:07 np0005591760 systemd[1]: libpod-6dc05a32fc11d6804ed32d398de1b11a7ca69267b7f1f934196ee2460309f43d.scope: Deactivated successfully.
Jan 22 04:34:07 np0005591760 conmon[84991]: conmon 6dc05a32fc11d6804ed3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6dc05a32fc11d6804ed32d398de1b11a7ca69267b7f1f934196ee2460309f43d.scope/container/memory.events
Jan 22 04:34:07 np0005591760 podman[84978]: 2026-01-22 09:34:07.045942692 +0000 UTC m=+0.092884708 container died 6dc05a32fc11d6804ed32d398de1b11a7ca69267b7f1f934196ee2460309f43d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_lehmann, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 04:34:07 np0005591760 systemd[1]: var-lib-containers-storage-overlay-7f6eef8b34adee1eae0c756cf0309f4e4d3e4df3fe63a603c2839866caba1871-merged.mount: Deactivated successfully.
Jan 22 04:34:07 np0005591760 podman[84978]: 2026-01-22 09:34:07.064470145 +0000 UTC m=+0.111412142 container remove 6dc05a32fc11d6804ed32d398de1b11a7ca69267b7f1f934196ee2460309f43d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 04:34:07 np0005591760 podman[84978]: 2026-01-22 09:34:06.969202754 +0000 UTC m=+0.016144790 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:07 np0005591760 systemd[1]: libpod-conmon-6dc05a32fc11d6804ed32d398de1b11a7ca69267b7f1f934196ee2460309f43d.scope: Deactivated successfully.
Jan 22 04:34:07 np0005591760 podman[85013]: 2026-01-22 09:34:07.17455529 +0000 UTC m=+0.027438970 container create 509a3491cc5f433b04633898c94b1548fc13ed96d8e5b9235f2f5942ec5c57dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_leakey, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:07 np0005591760 systemd[1]: Started libpod-conmon-509a3491cc5f433b04633898c94b1548fc13ed96d8e5b9235f2f5942ec5c57dc.scope.
Jan 22 04:34:07 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201897d4865a9c0d41d774b7c81445ab2da54aa75c289fce909bbdc89364c463/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201897d4865a9c0d41d774b7c81445ab2da54aa75c289fce909bbdc89364c463/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201897d4865a9c0d41d774b7c81445ab2da54aa75c289fce909bbdc89364c463/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201897d4865a9c0d41d774b7c81445ab2da54aa75c289fce909bbdc89364c463/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:07 np0005591760 podman[85013]: 2026-01-22 09:34:07.234012967 +0000 UTC m=+0.086896648 container init 509a3491cc5f433b04633898c94b1548fc13ed96d8e5b9235f2f5942ec5c57dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:34:07 np0005591760 podman[85013]: 2026-01-22 09:34:07.239006937 +0000 UTC m=+0.091890617 container start 509a3491cc5f433b04633898c94b1548fc13ed96d8e5b9235f2f5942ec5c57dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 04:34:07 np0005591760 podman[85013]: 2026-01-22 09:34:07.240046274 +0000 UTC m=+0.092929954 container attach 509a3491cc5f433b04633898c94b1548fc13ed96d8e5b9235f2f5942ec5c57dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_leakey, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 22 04:34:07 np0005591760 podman[85013]: 2026-01-22 09:34:07.163651226 +0000 UTC m=+0.016534925 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:07 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1829889370' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 04:34:07 np0005591760 ceph-mon[74254]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 04:34:07 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.102:0/1162022378' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e7fde3af-8dcc-4261-b14b-26da738aa0fb"}]: dispatch
Jan 22 04:34:07 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.102:0/1162022378' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e7fde3af-8dcc-4261-b14b-26da738aa0fb"}]': finished
Jan 22 04:34:07 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/3569405702' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]: {
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:    "0": [
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:        {
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            "devices": [
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "/dev/loop3"
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            ],
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            "lv_name": "ceph_lv0",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            "lv_size": "21470642176",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            "name": "ceph_lv0",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            "tags": {
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.cluster_name": "ceph",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.crush_device_class": "",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.encrypted": "0",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.osd_id": "0",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.type": "block",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.vdo": "0",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:                "ceph.with_tpm": "0"
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            },
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            "type": "block",
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:            "vg_name": "ceph_vg0"
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:        }
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]:    ]
Jan 22 04:34:07 np0005591760 friendly_leakey[85026]: }
Jan 22 04:34:07 np0005591760 systemd[1]: libpod-509a3491cc5f433b04633898c94b1548fc13ed96d8e5b9235f2f5942ec5c57dc.scope: Deactivated successfully.
Jan 22 04:34:07 np0005591760 podman[85013]: 2026-01-22 09:34:07.477939371 +0000 UTC m=+0.330823061 container died 509a3491cc5f433b04633898c94b1548fc13ed96d8e5b9235f2f5942ec5c57dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_leakey, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 04:34:07 np0005591760 systemd[1]: var-lib-containers-storage-overlay-201897d4865a9c0d41d774b7c81445ab2da54aa75c289fce909bbdc89364c463-merged.mount: Deactivated successfully.
Jan 22 04:34:07 np0005591760 podman[85013]: 2026-01-22 09:34:07.500375113 +0000 UTC m=+0.353258792 container remove 509a3491cc5f433b04633898c94b1548fc13ed96d8e5b9235f2f5942ec5c57dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_leakey, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 22 04:34:07 np0005591760 systemd[1]: libpod-conmon-509a3491cc5f433b04633898c94b1548fc13ed96d8e5b9235f2f5942ec5c57dc.scope: Deactivated successfully.
Jan 22 04:34:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v54: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:34:07 np0005591760 podman[85124]: 2026-01-22 09:34:07.883341781 +0000 UTC m=+0.026274097 container create e8e91bebf1e26302d534f81f3011ee601b56a53e038988c465b8a2630dfb0213 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_cori, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 04:34:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 22 04:34:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3569405702' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 04:34:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 22 04:34:07 np0005591760 vibrant_williamson[84869]: pool 'backups' created
Jan 22 04:34:07 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 22 04:34:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:07 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 15 pg[4.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:34:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 15 pg[3.0( empty local-lis/les=13/15 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:34:07 np0005591760 podman[84848]: 2026-01-22 09:34:07.912016085 +0000 UTC m=+1.366089196 container died cf3f5736f4dba5e4f733ad12710c4577c1a8a2028222f856e0e250169a80584e (image=quay.io/ceph/ceph:v19, name=vibrant_williamson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 22 04:34:07 np0005591760 systemd[1]: Started libpod-conmon-e8e91bebf1e26302d534f81f3011ee601b56a53e038988c465b8a2630dfb0213.scope.
Jan 22 04:34:07 np0005591760 systemd[1]: libpod-cf3f5736f4dba5e4f733ad12710c4577c1a8a2028222f856e0e250169a80584e.scope: Deactivated successfully.
Jan 22 04:34:07 np0005591760 systemd[1]: var-lib-containers-storage-overlay-775a0edba8c96ba9b79d035bd595b1c6cad9657c7d8b23f426e93f9cccb76971-merged.mount: Deactivated successfully.
Jan 22 04:34:07 np0005591760 podman[84848]: 2026-01-22 09:34:07.931413777 +0000 UTC m=+1.385486888 container remove cf3f5736f4dba5e4f733ad12710c4577c1a8a2028222f856e0e250169a80584e (image=quay.io/ceph/ceph:v19, name=vibrant_williamson, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:07 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:07 np0005591760 systemd[1]: libpod-conmon-cf3f5736f4dba5e4f733ad12710c4577c1a8a2028222f856e0e250169a80584e.scope: Deactivated successfully.
Jan 22 04:34:07 np0005591760 podman[85124]: 2026-01-22 09:34:07.943397174 +0000 UTC m=+0.086329499 container init e8e91bebf1e26302d534f81f3011ee601b56a53e038988c465b8a2630dfb0213 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_cori, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 04:34:07 np0005591760 podman[85124]: 2026-01-22 09:34:07.948488267 +0000 UTC m=+0.091420582 container start e8e91bebf1e26302d534f81f3011ee601b56a53e038988c465b8a2630dfb0213 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 22 04:34:07 np0005591760 podman[85124]: 2026-01-22 09:34:07.949811248 +0000 UTC m=+0.092743583 container attach e8e91bebf1e26302d534f81f3011ee601b56a53e038988c465b8a2630dfb0213 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:07 np0005591760 distracted_cori[85140]: 167 167
Jan 22 04:34:07 np0005591760 systemd[1]: libpod-e8e91bebf1e26302d534f81f3011ee601b56a53e038988c465b8a2630dfb0213.scope: Deactivated successfully.
Jan 22 04:34:07 np0005591760 conmon[85140]: conmon e8e91bebf1e26302d534 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8e91bebf1e26302d534f81f3011ee601b56a53e038988c465b8a2630dfb0213.scope/container/memory.events
Jan 22 04:34:07 np0005591760 podman[85124]: 2026-01-22 09:34:07.952172813 +0000 UTC m=+0.095105127 container died e8e91bebf1e26302d534f81f3011ee601b56a53e038988c465b8a2630dfb0213 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_cori, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 04:34:07 np0005591760 systemd[1]: var-lib-containers-storage-overlay-983f4a81703f0f77e9c7b3f34061345b410f13c335c021ca74762e8c6261966c-merged.mount: Deactivated successfully.
Jan 22 04:34:07 np0005591760 podman[85124]: 2026-01-22 09:34:07.872614008 +0000 UTC m=+0.015546344 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:07 np0005591760 podman[85124]: 2026-01-22 09:34:07.971709908 +0000 UTC m=+0.114642222 container remove e8e91bebf1e26302d534f81f3011ee601b56a53e038988c465b8a2630dfb0213 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_cori, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:07 np0005591760 systemd[1]: libpod-conmon-e8e91bebf1e26302d534f81f3011ee601b56a53e038988c465b8a2630dfb0213.scope: Deactivated successfully.
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:34:08 np0005591760 podman[85196]: 2026-01-22 09:34:08.084084771 +0000 UTC m=+0.026778216 container create 8ce8616cbd87ed8a2658a4683f0b09b641ee0e0eda4e73ba40f2941bbc3bb34c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_chandrasekhar, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 22 04:34:08 np0005591760 systemd[1]: Started libpod-conmon-8ce8616cbd87ed8a2658a4683f0b09b641ee0e0eda4e73ba40f2941bbc3bb34c.scope.
Jan 22 04:34:08 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b9e49fc4c71bd91bc6ff4a06f80493332e10467827912053cd922a77390d65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b9e49fc4c71bd91bc6ff4a06f80493332e10467827912053cd922a77390d65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b9e49fc4c71bd91bc6ff4a06f80493332e10467827912053cd922a77390d65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b9e49fc4c71bd91bc6ff4a06f80493332e10467827912053cd922a77390d65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:08 np0005591760 podman[85196]: 2026-01-22 09:34:08.13727395 +0000 UTC m=+0.079967414 container init 8ce8616cbd87ed8a2658a4683f0b09b641ee0e0eda4e73ba40f2941bbc3bb34c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 22 04:34:08 np0005591760 podman[85196]: 2026-01-22 09:34:08.145338249 +0000 UTC m=+0.088031684 container start 8ce8616cbd87ed8a2658a4683f0b09b641ee0e0eda4e73ba40f2941bbc3bb34c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 04:34:08 np0005591760 podman[85196]: 2026-01-22 09:34:08.146316359 +0000 UTC m=+0.089009905 container attach 8ce8616cbd87ed8a2658a4683f0b09b641ee0e0eda4e73ba40f2941bbc3bb34c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:34:08 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 5 completed events
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:34:08 np0005591760 podman[85196]: 2026-01-22 09:34:08.073580188 +0000 UTC m=+0.016273653 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:08 np0005591760 python3[85194]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:08 np0005591760 podman[85215]: 2026-01-22 09:34:08.214912869 +0000 UTC m=+0.029679176 container create d262b98cfc4845512c917b6f8269c944296f0a41aa2c7c0ede4f50f703477bdb (image=quay.io/ceph/ceph:v19, name=priceless_bartik, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Jan 22 04:34:08 np0005591760 systemd[1]: Started libpod-conmon-d262b98cfc4845512c917b6f8269c944296f0a41aa2c7c0ede4f50f703477bdb.scope.
Jan 22 04:34:08 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f2b98d230d65bdb9284597b2042c5fb009d147c6e2665044a34e21f7f44e7a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f2b98d230d65bdb9284597b2042c5fb009d147c6e2665044a34e21f7f44e7a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:08 np0005591760 podman[85215]: 2026-01-22 09:34:08.265334528 +0000 UTC m=+0.080100855 container init d262b98cfc4845512c917b6f8269c944296f0a41aa2c7c0ede4f50f703477bdb (image=quay.io/ceph/ceph:v19, name=priceless_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:08 np0005591760 podman[85215]: 2026-01-22 09:34:08.269720283 +0000 UTC m=+0.084486590 container start d262b98cfc4845512c917b6f8269c944296f0a41aa2c7c0ede4f50f703477bdb (image=quay.io/ceph/ceph:v19, name=priceless_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 04:34:08 np0005591760 podman[85215]: 2026-01-22 09:34:08.273805223 +0000 UTC m=+0.088571529 container attach d262b98cfc4845512c917b6f8269c944296f0a41aa2c7c0ede4f50f703477bdb (image=quay.io/ceph/ceph:v19, name=priceless_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/3569405702' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:08 np0005591760 podman[85215]: 2026-01-22 09:34:08.201385837 +0000 UTC m=+0.016152164 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2679896897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 04:34:08 np0005591760 lvm[85323]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:34:08 np0005591760 lvm[85323]: VG ceph_vg0 finished
Jan 22 04:34:08 np0005591760 strange_chandrasekhar[85210]: {}
Jan 22 04:34:08 np0005591760 systemd[1]: libpod-8ce8616cbd87ed8a2658a4683f0b09b641ee0e0eda4e73ba40f2941bbc3bb34c.scope: Deactivated successfully.
Jan 22 04:34:08 np0005591760 podman[85326]: 2026-01-22 09:34:08.691278353 +0000 UTC m=+0.017982669 container died 8ce8616cbd87ed8a2658a4683f0b09b641ee0e0eda4e73ba40f2941bbc3bb34c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 22 04:34:08 np0005591760 systemd[1]: var-lib-containers-storage-overlay-30b9e49fc4c71bd91bc6ff4a06f80493332e10467827912053cd922a77390d65-merged.mount: Deactivated successfully.
Jan 22 04:34:08 np0005591760 podman[85326]: 2026-01-22 09:34:08.709637891 +0000 UTC m=+0.036342208 container remove 8ce8616cbd87ed8a2658a4683f0b09b641ee0e0eda4e73ba40f2941bbc3bb34c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:08 np0005591760 systemd[1]: libpod-conmon-8ce8616cbd87ed8a2658a4683f0b09b641ee0e0eda4e73ba40f2941bbc3bb34c.scope: Deactivated successfully.
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2679896897' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 22 04:34:08 np0005591760 priceless_bartik[85227]: pool 'images' created
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:08 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:08 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 16 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:34:08 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:34:08 np0005591760 systemd[1]: libpod-d262b98cfc4845512c917b6f8269c944296f0a41aa2c7c0ede4f50f703477bdb.scope: Deactivated successfully.
Jan 22 04:34:08 np0005591760 podman[85215]: 2026-01-22 09:34:08.913597807 +0000 UTC m=+0.728364114 container died d262b98cfc4845512c917b6f8269c944296f0a41aa2c7c0ede4f50f703477bdb (image=quay.io/ceph/ceph:v19, name=priceless_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:08 np0005591760 systemd[1]: var-lib-containers-storage-overlay-7f2b98d230d65bdb9284597b2042c5fb009d147c6e2665044a34e21f7f44e7a4-merged.mount: Deactivated successfully.
Jan 22 04:34:08 np0005591760 podman[85215]: 2026-01-22 09:34:08.932219599 +0000 UTC m=+0.746985906 container remove d262b98cfc4845512c917b6f8269c944296f0a41aa2c7c0ede4f50f703477bdb (image=quay.io/ceph/ceph:v19, name=priceless_bartik, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:34:08 np0005591760 systemd[1]: libpod-conmon-d262b98cfc4845512c917b6f8269c944296f0a41aa2c7c0ede4f50f703477bdb.scope: Deactivated successfully.
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.bisona started
Jan 22 04:34:09 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mgr.compute-2.bisona 192.168.122.102:0/17978669; not ready for session (expect reconnect)
Jan 22 04:34:09 np0005591760 python3[85373]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:09 np0005591760 podman[85374]: 2026-01-22 09:34:09.191044706 +0000 UTC m=+0.027782525 container create e9d34fb7254291b5ffc82195cdb9060d780182ac709503ec0ab31e6dd27b8929 (image=quay.io/ceph/ceph:v19, name=silly_volhard, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 22 04:34:09 np0005591760 systemd[1]: Started libpod-conmon-e9d34fb7254291b5ffc82195cdb9060d780182ac709503ec0ab31e6dd27b8929.scope.
Jan 22 04:34:09 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98e178d091fc86d5abc84d7eab34fd643873d1eb70bc4f314369c3162fd84b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98e178d091fc86d5abc84d7eab34fd643873d1eb70bc4f314369c3162fd84b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:09 np0005591760 podman[85374]: 2026-01-22 09:34:09.256054134 +0000 UTC m=+0.092791954 container init e9d34fb7254291b5ffc82195cdb9060d780182ac709503ec0ab31e6dd27b8929 (image=quay.io/ceph/ceph:v19, name=silly_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 22 04:34:09 np0005591760 podman[85374]: 2026-01-22 09:34:09.260509069 +0000 UTC m=+0.097246889 container start e9d34fb7254291b5ffc82195cdb9060d780182ac709503ec0ab31e6dd27b8929 (image=quay.io/ceph/ceph:v19, name=silly_volhard, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:09 np0005591760 podman[85374]: 2026-01-22 09:34:09.261570497 +0000 UTC m=+0.098308316 container attach e9d34fb7254291b5ffc82195cdb9060d780182ac709503ec0ab31e6dd27b8929 (image=quay.io/ceph/ceph:v19, name=silly_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:09 np0005591760 podman[85374]: 2026-01-22 09:34:09.180117538 +0000 UTC m=+0.016855357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/2679896897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/2679896897' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.upcmhd started
Jan 22 04:34:09 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from mgr.compute-1.upcmhd 192.168.122.101:0/3004199765; not ready for session (expect reconnect)
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1916958918' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 04:34:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v57: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1916958918' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 04:34:09 np0005591760 silly_volhard[85386]: pool 'cephfs.cephfs.meta' created
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:09 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:09 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 17 pg[6.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:34:09 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 17 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:34:09 np0005591760 podman[85374]: 2026-01-22 09:34:09.926002029 +0000 UTC m=+0.762739849 container died e9d34fb7254291b5ffc82195cdb9060d780182ac709503ec0ab31e6dd27b8929 (image=quay.io/ceph/ceph:v19, name=silly_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:09 np0005591760 systemd[1]: libpod-e9d34fb7254291b5ffc82195cdb9060d780182ac709503ec0ab31e6dd27b8929.scope: Deactivated successfully.
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rfmoog(active, since 97s), standbys: compute-2.bisona, compute-1.upcmhd
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.bisona", "id": "compute-2.bisona"} v 0)
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr metadata", "who": "compute-2.bisona", "id": "compute-2.bisona"}]: dispatch
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.upcmhd", "id": "compute-1.upcmhd"} v 0)
Jan 22 04:34:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr metadata", "who": "compute-1.upcmhd", "id": "compute-1.upcmhd"}]: dispatch
Jan 22 04:34:09 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e98e178d091fc86d5abc84d7eab34fd643873d1eb70bc4f314369c3162fd84b5-merged.mount: Deactivated successfully.
Jan 22 04:34:09 np0005591760 podman[85374]: 2026-01-22 09:34:09.943684242 +0000 UTC m=+0.780422062 container remove e9d34fb7254291b5ffc82195cdb9060d780182ac709503ec0ab31e6dd27b8929 (image=quay.io/ceph/ceph:v19, name=silly_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:34:09 np0005591760 systemd[1]: libpod-conmon-e9d34fb7254291b5ffc82195cdb9060d780182ac709503ec0ab31e6dd27b8929.scope: Deactivated successfully.
Jan 22 04:34:10 np0005591760 python3[85448]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:10 np0005591760 podman[85449]: 2026-01-22 09:34:10.193488108 +0000 UTC m=+0.028346238 container create 6a07aba5c62287b4fea3edc0de2e1180fa0f3bba7a2a4aeac82f0083e76c0ce1 (image=quay.io/ceph/ceph:v19, name=dreamy_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 04:34:10 np0005591760 systemd[1]: Started libpod-conmon-6a07aba5c62287b4fea3edc0de2e1180fa0f3bba7a2a4aeac82f0083e76c0ce1.scope.
Jan 22 04:34:10 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5044228be6151b54d53f8ee9613d5267a32ad8fc3a92f2284d348e59e49d2a57/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5044228be6151b54d53f8ee9613d5267a32ad8fc3a92f2284d348e59e49d2a57/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:10 np0005591760 podman[85449]: 2026-01-22 09:34:10.243176578 +0000 UTC m=+0.078034727 container init 6a07aba5c62287b4fea3edc0de2e1180fa0f3bba7a2a4aeac82f0083e76c0ce1 (image=quay.io/ceph/ceph:v19, name=dreamy_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:10 np0005591760 podman[85449]: 2026-01-22 09:34:10.246933158 +0000 UTC m=+0.081791288 container start 6a07aba5c62287b4fea3edc0de2e1180fa0f3bba7a2a4aeac82f0083e76c0ce1 (image=quay.io/ceph/ceph:v19, name=dreamy_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 04:34:10 np0005591760 podman[85449]: 2026-01-22 09:34:10.247867728 +0000 UTC m=+0.082725877 container attach 6a07aba5c62287b4fea3edc0de2e1180fa0f3bba7a2a4aeac82f0083e76c0ce1 (image=quay.io/ceph/ceph:v19, name=dreamy_yonath, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:10 np0005591760 podman[85449]: 2026-01-22 09:34:10.182389588 +0000 UTC m=+0.017247738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:10 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1916958918' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 04:34:10 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1916958918' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 04:34:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 22 04:34:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1481446863' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 04:34:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 22 04:34:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1481446863' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 04:34:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Jan 22 04:34:10 np0005591760 dreamy_yonath[85461]: pool 'cephfs.cephfs.data' created
Jan 22 04:34:10 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Jan 22 04:34:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:10 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 18 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:34:10 np0005591760 systemd[1]: libpod-6a07aba5c62287b4fea3edc0de2e1180fa0f3bba7a2a4aeac82f0083e76c0ce1.scope: Deactivated successfully.
Jan 22 04:34:10 np0005591760 podman[85449]: 2026-01-22 09:34:10.929929977 +0000 UTC m=+0.764788105 container died 6a07aba5c62287b4fea3edc0de2e1180fa0f3bba7a2a4aeac82f0083e76c0ce1 (image=quay.io/ceph/ceph:v19, name=dreamy_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:10 np0005591760 systemd[1]: var-lib-containers-storage-overlay-5044228be6151b54d53f8ee9613d5267a32ad8fc3a92f2284d348e59e49d2a57-merged.mount: Deactivated successfully.
Jan 22 04:34:10 np0005591760 podman[85449]: 2026-01-22 09:34:10.947196306 +0000 UTC m=+0.782054435 container remove 6a07aba5c62287b4fea3edc0de2e1180fa0f3bba7a2a4aeac82f0083e76c0ce1 (image=quay.io/ceph/ceph:v19, name=dreamy_yonath, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 04:34:10 np0005591760 systemd[1]: libpod-conmon-6a07aba5c62287b4fea3edc0de2e1180fa0f3bba7a2a4aeac82f0083e76c0ce1.scope: Deactivated successfully.
Jan 22 04:34:11 np0005591760 python3[85523]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:11 np0005591760 podman[85524]: 2026-01-22 09:34:11.225176349 +0000 UTC m=+0.028065900 container create 9647e87800aea6e8d1e092615527d2d6a71329b2df49ffe6e7ef8c6cd139db0c (image=quay.io/ceph/ceph:v19, name=sweet_merkle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:11 np0005591760 systemd[1]: Started libpod-conmon-9647e87800aea6e8d1e092615527d2d6a71329b2df49ffe6e7ef8c6cd139db0c.scope.
Jan 22 04:34:11 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:11 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65acfeff24eb27c045e1d300e09f4c1072cbd69ac471e99967eafae03568ee69/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:11 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65acfeff24eb27c045e1d300e09f4c1072cbd69ac471e99967eafae03568ee69/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:11 np0005591760 podman[85524]: 2026-01-22 09:34:11.280347208 +0000 UTC m=+0.083236769 container init 9647e87800aea6e8d1e092615527d2d6a71329b2df49ffe6e7ef8c6cd139db0c (image=quay.io/ceph/ceph:v19, name=sweet_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:11 np0005591760 podman[85524]: 2026-01-22 09:34:11.283882743 +0000 UTC m=+0.086772294 container start 9647e87800aea6e8d1e092615527d2d6a71329b2df49ffe6e7ef8c6cd139db0c (image=quay.io/ceph/ceph:v19, name=sweet_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 04:34:11 np0005591760 podman[85524]: 2026-01-22 09:34:11.287798574 +0000 UTC m=+0.090688135 container attach 9647e87800aea6e8d1e092615527d2d6a71329b2df49ffe6e7ef8c6cd139db0c (image=quay.io/ceph/ceph:v19, name=sweet_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1481446863' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1481446863' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 04:34:11 np0005591760 podman[85524]: 2026-01-22 09:34:11.213864477 +0000 UTC m=+0.016754048 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:11 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 22 04:34:11 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2133931872' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 22 04:34:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 2 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2133931872' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Jan 22 04:34:11 np0005591760 sweet_merkle[85537]: enabled application 'rbd' on pool 'vms'
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:11 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:11 np0005591760 systemd[1]: libpod-9647e87800aea6e8d1e092615527d2d6a71329b2df49ffe6e7ef8c6cd139db0c.scope: Deactivated successfully.
Jan 22 04:34:11 np0005591760 podman[85524]: 2026-01-22 09:34:11.931627064 +0000 UTC m=+0.734516616 container died 9647e87800aea6e8d1e092615527d2d6a71329b2df49ffe6e7ef8c6cd139db0c (image=quay.io/ceph/ceph:v19, name=sweet_merkle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:11 np0005591760 systemd[1]: var-lib-containers-storage-overlay-65acfeff24eb27c045e1d300e09f4c1072cbd69ac471e99967eafae03568ee69-merged.mount: Deactivated successfully.
Jan 22 04:34:11 np0005591760 podman[85524]: 2026-01-22 09:34:11.950648668 +0000 UTC m=+0.753538220 container remove 9647e87800aea6e8d1e092615527d2d6a71329b2df49ffe6e7ef8c6cd139db0c (image=quay.io/ceph/ceph:v19, name=sweet_merkle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:11 np0005591760 systemd[1]: libpod-conmon-9647e87800aea6e8d1e092615527d2d6a71329b2df49ffe6e7ef8c6cd139db0c.scope: Deactivated successfully.
Jan 22 04:34:12 np0005591760 python3[85597]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:12 np0005591760 podman[85598]: 2026-01-22 09:34:12.216056157 +0000 UTC m=+0.026552742 container create 671383b918173535867bb947757e82d07a5ab5cc3b50adc1d6192c987c0fb75c (image=quay.io/ceph/ceph:v19, name=vigilant_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:12 np0005591760 systemd[1]: Started libpod-conmon-671383b918173535867bb947757e82d07a5ab5cc3b50adc1d6192c987c0fb75c.scope.
Jan 22 04:34:12 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:12 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ce37de7ba5ca79c06a968ead043a2b566d4ce25f117623bcf20c899579b965/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:12 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ce37de7ba5ca79c06a968ead043a2b566d4ce25f117623bcf20c899579b965/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:12 np0005591760 podman[85598]: 2026-01-22 09:34:12.258899841 +0000 UTC m=+0.069396426 container init 671383b918173535867bb947757e82d07a5ab5cc3b50adc1d6192c987c0fb75c (image=quay.io/ceph/ceph:v19, name=vigilant_archimedes, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 04:34:12 np0005591760 podman[85598]: 2026-01-22 09:34:12.263097112 +0000 UTC m=+0.073593697 container start 671383b918173535867bb947757e82d07a5ab5cc3b50adc1d6192c987c0fb75c (image=quay.io/ceph/ceph:v19, name=vigilant_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:12 np0005591760 podman[85598]: 2026-01-22 09:34:12.264178386 +0000 UTC m=+0.074674972 container attach 671383b918173535867bb947757e82d07a5ab5cc3b50adc1d6192c987c0fb75c (image=quay.io/ceph/ceph:v19, name=vigilant_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 04:34:12 np0005591760 podman[85598]: 2026-01-22 09:34:12.205486842 +0000 UTC m=+0.015983447 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: Deploying daemon osd.2 on compute-2
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/2133931872' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/2133931872' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1682354899' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1682354899' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Jan 22 04:34:12 np0005591760 vigilant_archimedes[85611]: enabled application 'rbd' on pool 'volumes'
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:12 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:12 np0005591760 systemd[1]: libpod-671383b918173535867bb947757e82d07a5ab5cc3b50adc1d6192c987c0fb75c.scope: Deactivated successfully.
Jan 22 04:34:12 np0005591760 conmon[85611]: conmon 671383b918173535867b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-671383b918173535867bb947757e82d07a5ab5cc3b50adc1d6192c987c0fb75c.scope/container/memory.events
Jan 22 04:34:12 np0005591760 podman[85598]: 2026-01-22 09:34:12.936059182 +0000 UTC m=+0.746555777 container died 671383b918173535867bb947757e82d07a5ab5cc3b50adc1d6192c987c0fb75c (image=quay.io/ceph/ceph:v19, name=vigilant_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 22 04:34:12 np0005591760 systemd[1]: var-lib-containers-storage-overlay-24ce37de7ba5ca79c06a968ead043a2b566d4ce25f117623bcf20c899579b965-merged.mount: Deactivated successfully.
Jan 22 04:34:12 np0005591760 podman[85598]: 2026-01-22 09:34:12.954173278 +0000 UTC m=+0.764669864 container remove 671383b918173535867bb947757e82d07a5ab5cc3b50adc1d6192c987c0fb75c (image=quay.io/ceph/ceph:v19, name=vigilant_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 22 04:34:12 np0005591760 systemd[1]: libpod-conmon-671383b918173535867bb947757e82d07a5ab5cc3b50adc1d6192c987c0fb75c.scope: Deactivated successfully.
Jan 22 04:34:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:34:13 np0005591760 python3[85670]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:13 np0005591760 podman[85671]: 2026-01-22 09:34:13.192769478 +0000 UTC m=+0.024044550 container create 4cfb49874d76d708f86947ba0cb8996e9d3f836a26ec272eb1c5ccff7d5c430f (image=quay.io/ceph/ceph:v19, name=stupefied_kirch, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:13 np0005591760 systemd[1]: Started libpod-conmon-4cfb49874d76d708f86947ba0cb8996e9d3f836a26ec272eb1c5ccff7d5c430f.scope.
Jan 22 04:34:13 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a19ac9d0d787afcf2c808e87d1dd0b3b8a2cf19f62121ba8468cd7edebc233/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a19ac9d0d787afcf2c808e87d1dd0b3b8a2cf19f62121ba8468cd7edebc233/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:13 np0005591760 podman[85671]: 2026-01-22 09:34:13.251901402 +0000 UTC m=+0.083176474 container init 4cfb49874d76d708f86947ba0cb8996e9d3f836a26ec272eb1c5ccff7d5c430f (image=quay.io/ceph/ceph:v19, name=stupefied_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 04:34:13 np0005591760 podman[85671]: 2026-01-22 09:34:13.255892995 +0000 UTC m=+0.087168067 container start 4cfb49874d76d708f86947ba0cb8996e9d3f836a26ec272eb1c5ccff7d5c430f (image=quay.io/ceph/ceph:v19, name=stupefied_kirch, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:13 np0005591760 podman[85671]: 2026-01-22 09:34:13.25688923 +0000 UTC m=+0.088164303 container attach 4cfb49874d76d708f86947ba0cb8996e9d3f836a26ec272eb1c5ccff7d5c430f (image=quay.io/ceph/ceph:v19, name=stupefied_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:13 np0005591760 podman[85671]: 2026-01-22 09:34:13.183103914 +0000 UTC m=+0.014379006 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:13 np0005591760 ceph-mon[74254]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 04:34:13 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1682354899' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 22 04:34:13 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1682354899' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 22 04:34:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 22 04:34:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1107707631' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 22 04:34:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 2 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1107707631' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1107707631' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Jan 22 04:34:14 np0005591760 stupefied_kirch[85684]: enabled application 'rbd' on pool 'backups'
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:14 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:14 np0005591760 systemd[1]: libpod-4cfb49874d76d708f86947ba0cb8996e9d3f836a26ec272eb1c5ccff7d5c430f.scope: Deactivated successfully.
Jan 22 04:34:14 np0005591760 conmon[85684]: conmon 4cfb49874d76d708f869 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4cfb49874d76d708f86947ba0cb8996e9d3f836a26ec272eb1c5ccff7d5c430f.scope/container/memory.events
Jan 22 04:34:14 np0005591760 podman[85671]: 2026-01-22 09:34:14.338772694 +0000 UTC m=+1.170047776 container died 4cfb49874d76d708f86947ba0cb8996e9d3f836a26ec272eb1c5ccff7d5c430f (image=quay.io/ceph/ceph:v19, name=stupefied_kirch, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:14 np0005591760 systemd[1]: var-lib-containers-storage-overlay-83a19ac9d0d787afcf2c808e87d1dd0b3b8a2cf19f62121ba8468cd7edebc233-merged.mount: Deactivated successfully.
Jan 22 04:34:14 np0005591760 podman[85671]: 2026-01-22 09:34:14.357267076 +0000 UTC m=+1.188542148 container remove 4cfb49874d76d708f86947ba0cb8996e9d3f836a26ec272eb1c5ccff7d5c430f (image=quay.io/ceph/ceph:v19, name=stupefied_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 04:34:14 np0005591760 systemd[1]: libpod-conmon-4cfb49874d76d708f86947ba0cb8996e9d3f836a26ec272eb1c5ccff7d5c430f.scope: Deactivated successfully.
Jan 22 04:34:14 np0005591760 python3[85744]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:14 np0005591760 podman[85745]: 2026-01-22 09:34:14.604309956 +0000 UTC m=+0.023756107 container create a7ffb63249f6cd18a2e15bc6ecd181ff3910c0d8c2d9231b3b7e9374ff8ff7c8 (image=quay.io/ceph/ceph:v19, name=gifted_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 22 04:34:14 np0005591760 systemd[1]: Started libpod-conmon-a7ffb63249f6cd18a2e15bc6ecd181ff3910c0d8c2d9231b3b7e9374ff8ff7c8.scope.
Jan 22 04:34:14 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb1802536efe90a6e4840ecc1c33d966665b09b86ceb3c893ca3b4f3cce5b4d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb1802536efe90a6e4840ecc1c33d966665b09b86ceb3c893ca3b4f3cce5b4d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:14 np0005591760 podman[85745]: 2026-01-22 09:34:14.664885598 +0000 UTC m=+0.084331750 container init a7ffb63249f6cd18a2e15bc6ecd181ff3910c0d8c2d9231b3b7e9374ff8ff7c8 (image=quay.io/ceph/ceph:v19, name=gifted_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:14 np0005591760 podman[85745]: 2026-01-22 09:34:14.668693705 +0000 UTC m=+0.088139847 container start a7ffb63249f6cd18a2e15bc6ecd181ff3910c0d8c2d9231b3b7e9374ff8ff7c8 (image=quay.io/ceph/ceph:v19, name=gifted_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:14 np0005591760 podman[85745]: 2026-01-22 09:34:14.670005444 +0000 UTC m=+0.089451596 container attach a7ffb63249f6cd18a2e15bc6ecd181ff3910c0d8c2d9231b3b7e9374ff8ff7c8 (image=quay.io/ceph/ceph:v19, name=gifted_brahmagupta, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:14 np0005591760 podman[85745]: 2026-01-22 09:34:14.594332164 +0000 UTC m=+0.013778336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 22 04:34:14 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/578410127' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1107707631' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/578410127' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/578410127' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Jan 22 04:34:15 np0005591760 gifted_brahmagupta[85757]: enabled application 'rbd' on pool 'images'
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:15 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:15 np0005591760 systemd[1]: libpod-a7ffb63249f6cd18a2e15bc6ecd181ff3910c0d8c2d9231b3b7e9374ff8ff7c8.scope: Deactivated successfully.
Jan 22 04:34:15 np0005591760 conmon[85757]: conmon a7ffb63249f6cd18a2e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7ffb63249f6cd18a2e15bc6ecd181ff3910c0d8c2d9231b3b7e9374ff8ff7c8.scope/container/memory.events
Jan 22 04:34:15 np0005591760 podman[85745]: 2026-01-22 09:34:15.349493808 +0000 UTC m=+0.768939961 container died a7ffb63249f6cd18a2e15bc6ecd181ff3910c0d8c2d9231b3b7e9374ff8ff7c8 (image=quay.io/ceph/ceph:v19, name=gifted_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:15 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ffb1802536efe90a6e4840ecc1c33d966665b09b86ceb3c893ca3b4f3cce5b4d-merged.mount: Deactivated successfully.
Jan 22 04:34:15 np0005591760 podman[85745]: 2026-01-22 09:34:15.37098396 +0000 UTC m=+0.790430112 container remove a7ffb63249f6cd18a2e15bc6ecd181ff3910c0d8c2d9231b3b7e9374ff8ff7c8 (image=quay.io/ceph/ceph:v19, name=gifted_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 04:34:15 np0005591760 systemd[1]: libpod-conmon-a7ffb63249f6cd18a2e15bc6ecd181ff3910c0d8c2d9231b3b7e9374ff8ff7c8.scope: Deactivated successfully.
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:15 np0005591760 python3[85818]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:15 np0005591760 podman[85844]: 2026-01-22 09:34:15.626560712 +0000 UTC m=+0.027962434 container create dc6b4a5f7b9e387164b942f537787bf33dde8ea0ee64c9af0838c4a5e9fb5ac0 (image=quay.io/ceph/ceph:v19, name=youthful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 04:34:15 np0005591760 systemd[1]: Started libpod-conmon-dc6b4a5f7b9e387164b942f537787bf33dde8ea0ee64c9af0838c4a5e9fb5ac0.scope.
Jan 22 04:34:15 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbd2dba4a0f8d9491b481a4cf637249bc042379aead98af921ee4628837cadd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbd2dba4a0f8d9491b481a4cf637249bc042379aead98af921ee4628837cadd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:15 np0005591760 podman[85844]: 2026-01-22 09:34:15.675028323 +0000 UTC m=+0.076430055 container init dc6b4a5f7b9e387164b942f537787bf33dde8ea0ee64c9af0838c4a5e9fb5ac0 (image=quay.io/ceph/ceph:v19, name=youthful_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:15 np0005591760 podman[85844]: 2026-01-22 09:34:15.680529758 +0000 UTC m=+0.081931470 container start dc6b4a5f7b9e387164b942f537787bf33dde8ea0ee64c9af0838c4a5e9fb5ac0 (image=quay.io/ceph/ceph:v19, name=youthful_mcnulty, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:15 np0005591760 podman[85844]: 2026-01-22 09:34:15.681485097 +0000 UTC m=+0.082886809 container attach dc6b4a5f7b9e387164b942f537787bf33dde8ea0ee64c9af0838c4a5e9fb5ac0 (image=quay.io/ceph/ceph:v19, name=youthful_mcnulty, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:15 np0005591760 podman[85844]: 2026-01-22 09:34:15.61519577 +0000 UTC m=+0.016597502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 22 04:34:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1249265143' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/578410127' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1249265143' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1249265143' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Jan 22 04:34:16 np0005591760 youthful_mcnulty[85879]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:16 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:16 np0005591760 systemd[1]: libpod-dc6b4a5f7b9e387164b942f537787bf33dde8ea0ee64c9af0838c4a5e9fb5ac0.scope: Deactivated successfully.
Jan 22 04:34:16 np0005591760 podman[85844]: 2026-01-22 09:34:16.501477852 +0000 UTC m=+0.902879564 container died dc6b4a5f7b9e387164b942f537787bf33dde8ea0ee64c9af0838c4a5e9fb5ac0 (image=quay.io/ceph/ceph:v19, name=youthful_mcnulty, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 04:34:16 np0005591760 systemd[1]: var-lib-containers-storage-overlay-bcbd2dba4a0f8d9491b481a4cf637249bc042379aead98af921ee4628837cadd-merged.mount: Deactivated successfully.
Jan 22 04:34:16 np0005591760 podman[85844]: 2026-01-22 09:34:16.52407654 +0000 UTC m=+0.925478251 container remove dc6b4a5f7b9e387164b942f537787bf33dde8ea0ee64c9af0838c4a5e9fb5ac0 (image=quay.io/ceph/ceph:v19, name=youthful_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:16 np0005591760 systemd[1]: libpod-conmon-dc6b4a5f7b9e387164b942f537787bf33dde8ea0ee64c9af0838c4a5e9fb5ac0.scope: Deactivated successfully.
Jan 22 04:34:16 np0005591760 python3[85994]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:16 np0005591760 podman[85995]: 2026-01-22 09:34:16.79528689 +0000 UTC m=+0.029640404 container create f5a04efd7a838dc46a2030682becb8176ed0b0b22dccd726c822b3689007b23e (image=quay.io/ceph/ceph:v19, name=crazy_diffie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 22 04:34:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 04:34:16 np0005591760 systemd[1]: Started libpod-conmon-f5a04efd7a838dc46a2030682becb8176ed0b0b22dccd726c822b3689007b23e.scope.
Jan 22 04:34:16 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f95cbc29e30be967ed0e667b653f06f581ed3991fff38281a921df4852bc774/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f95cbc29e30be967ed0e667b653f06f581ed3991fff38281a921df4852bc774/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:16 np0005591760 podman[85995]: 2026-01-22 09:34:16.85400764 +0000 UTC m=+0.088361175 container init f5a04efd7a838dc46a2030682becb8176ed0b0b22dccd726c822b3689007b23e (image=quay.io/ceph/ceph:v19, name=crazy_diffie, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:16 np0005591760 podman[85995]: 2026-01-22 09:34:16.857885409 +0000 UTC m=+0.092238923 container start f5a04efd7a838dc46a2030682becb8176ed0b0b22dccd726c822b3689007b23e (image=quay.io/ceph/ceph:v19, name=crazy_diffie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:16 np0005591760 podman[85995]: 2026-01-22 09:34:16.858903145 +0000 UTC m=+0.093256659 container attach f5a04efd7a838dc46a2030682becb8176ed0b0b22dccd726c822b3689007b23e (image=quay.io/ceph/ceph:v19, name=crazy_diffie, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:16 np0005591760 podman[85995]: 2026-01-22 09:34:16.781582474 +0000 UTC m=+0.015935998 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/909696606' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1249265143' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: from='osd.2 [v2:192.168.122.102:6800/2745861301,v1:192.168.122.102:6801/2745861301]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/909696606' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/909696606' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Jan 22 04:34:17 np0005591760 crazy_diffie[86007]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:17 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 04:34:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e24 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Jan 22 04:34:17 np0005591760 systemd[1]: libpod-f5a04efd7a838dc46a2030682becb8176ed0b0b22dccd726c822b3689007b23e.scope: Deactivated successfully.
Jan 22 04:34:17 np0005591760 conmon[86007]: conmon f5a04efd7a838dc46a20 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5a04efd7a838dc46a2030682becb8176ed0b0b22dccd726c822b3689007b23e.scope/container/memory.events
Jan 22 04:34:17 np0005591760 podman[85995]: 2026-01-22 09:34:17.506923136 +0000 UTC m=+0.741276650 container died f5a04efd7a838dc46a2030682becb8176ed0b0b22dccd726c822b3689007b23e (image=quay.io/ceph/ceph:v19, name=crazy_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:17 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1f95cbc29e30be967ed0e667b653f06f581ed3991fff38281a921df4852bc774-merged.mount: Deactivated successfully.
Jan 22 04:34:17 np0005591760 podman[85995]: 2026-01-22 09:34:17.524346191 +0000 UTC m=+0.758699704 container remove f5a04efd7a838dc46a2030682becb8176ed0b0b22dccd726c822b3689007b23e (image=quay.io/ceph/ceph:v19, name=crazy_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:17 np0005591760 systemd[1]: libpod-conmon-f5a04efd7a838dc46a2030682becb8176ed0b0b22dccd726c822b3689007b23e.scope: Deactivated successfully.
Jan 22 04:34:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.7M
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.7M
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:34:18 np0005591760 python3[86154]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Jan 22 04:34:18 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 25 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=25 pruub=15.425275803s) [] r=-1 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active pruub 64.155975342s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:34:18 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 25 pg[3.0( empty local-lis/les=13/15 n=0 ec=13/13 lis/c=13/13 les/c/f=15/15/0 sis=25 pruub=13.405175209s) [] r=-1 lpr=25 pi=[13,25)/1 crt=0'0 mlcod 0'0 active pruub 62.135890961s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:34:18 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 25 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=25 pruub=15.425275803s) [] r=-1 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.155975342s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:34:18 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 25 pg[3.0( empty local-lis/les=13/15 n=0 ec=13/13 lis/c=13/13 les/c/f=15/15/0 sis=25 pruub=13.405175209s) [] r=-1 lpr=25 pi=[13,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.135890961s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/909696606' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: from='osd.2 [v2:192.168.122.102:6800/2745861301,v1:192.168.122.102:6801/2745861301]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2745861301; not ready for session (expect reconnect)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:18 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:18 np0005591760 python3[86365]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769074458.0960267-37710-124344510655001/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:19 np0005591760 python3[86736]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:34:19 np0005591760 python3[86861]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769074458.830672-37724-35815809738447/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=25b896e13722509e3243c025eae46818aca72a97 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:34:19 np0005591760 podman[86892]: 2026-01-22 09:34:19.388974688 +0000 UTC m=+0.028376023 container create 9ba44889f0fc7ea3a0c030da2500411f2f06087d7c3c20564bafa44864dc4806 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 04:34:19 np0005591760 systemd[1]: Started libpod-conmon-9ba44889f0fc7ea3a0c030da2500411f2f06087d7c3c20564bafa44864dc4806.scope.
Jan 22 04:34:19 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:19 np0005591760 podman[86892]: 2026-01-22 09:34:19.428714801 +0000 UTC m=+0.068116155 container init 9ba44889f0fc7ea3a0c030da2500411f2f06087d7c3c20564bafa44864dc4806 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:19 np0005591760 podman[86892]: 2026-01-22 09:34:19.433003334 +0000 UTC m=+0.072404668 container start 9ba44889f0fc7ea3a0c030da2500411f2f06087d7c3c20564bafa44864dc4806 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:19 np0005591760 podman[86892]: 2026-01-22 09:34:19.434097543 +0000 UTC m=+0.073498878 container attach 9ba44889f0fc7ea3a0c030da2500411f2f06087d7c3c20564bafa44864dc4806 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_panini, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:19 np0005591760 thirsty_panini[86929]: 167 167
Jan 22 04:34:19 np0005591760 systemd[1]: libpod-9ba44889f0fc7ea3a0c030da2500411f2f06087d7c3c20564bafa44864dc4806.scope: Deactivated successfully.
Jan 22 04:34:19 np0005591760 podman[86892]: 2026-01-22 09:34:19.436724959 +0000 UTC m=+0.076126293 container died 9ba44889f0fc7ea3a0c030da2500411f2f06087d7c3c20564bafa44864dc4806 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 22 04:34:19 np0005591760 systemd[1]: var-lib-containers-storage-overlay-417553c3a8830bd2ba39078e519966cb93b3b8b1e9bed0884563d84868c60497-merged.mount: Deactivated successfully.
Jan 22 04:34:19 np0005591760 podman[86892]: 2026-01-22 09:34:19.453738895 +0000 UTC m=+0.093140228 container remove 9ba44889f0fc7ea3a0c030da2500411f2f06087d7c3c20564bafa44864dc4806 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 04:34:19 np0005591760 podman[86892]: 2026-01-22 09:34:19.377469854 +0000 UTC m=+0.016871198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:19 np0005591760 systemd[1]: libpod-conmon-9ba44889f0fc7ea3a0c030da2500411f2f06087d7c3c20564bafa44864dc4806.scope: Deactivated successfully.
Jan 22 04:34:19 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2745861301; not ready for session (expect reconnect)
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:19 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: Adjusting osd_memory_target on compute-2 to 128.7M
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: Unable to set osd_memory_target on compute-2 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: Cluster is now healthy
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:19 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:34:19 np0005591760 podman[86977]: 2026-01-22 09:34:19.573471608 +0000 UTC m=+0.030029856 container create aa78c2d108cfd05eb3febc77fa9ececedf82ffafd5b68c70ccabb72adf5c9ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_curran, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 22 04:34:19 np0005591760 systemd[1]: Started libpod-conmon-aa78c2d108cfd05eb3febc77fa9ececedf82ffafd5b68c70ccabb72adf5c9ffb.scope.
Jan 22 04:34:19 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:19 np0005591760 python3[86971]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1dd631e4b1e3da387bf315aa8fb95ee967d5e6bd435537b7a312faead78774/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1dd631e4b1e3da387bf315aa8fb95ee967d5e6bd435537b7a312faead78774/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1dd631e4b1e3da387bf315aa8fb95ee967d5e6bd435537b7a312faead78774/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1dd631e4b1e3da387bf315aa8fb95ee967d5e6bd435537b7a312faead78774/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1dd631e4b1e3da387bf315aa8fb95ee967d5e6bd435537b7a312faead78774/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:19 np0005591760 podman[86977]: 2026-01-22 09:34:19.639674311 +0000 UTC m=+0.096232569 container init aa78c2d108cfd05eb3febc77fa9ececedf82ffafd5b68c70ccabb72adf5c9ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:34:19 np0005591760 podman[86977]: 2026-01-22 09:34:19.644840575 +0000 UTC m=+0.101398824 container start aa78c2d108cfd05eb3febc77fa9ececedf82ffafd5b68c70ccabb72adf5c9ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_curran, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:19 np0005591760 podman[86977]: 2026-01-22 09:34:19.646541277 +0000 UTC m=+0.103099545 container attach aa78c2d108cfd05eb3febc77fa9ececedf82ffafd5b68c70ccabb72adf5c9ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_curran, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:19 np0005591760 podman[86977]: 2026-01-22 09:34:19.561077277 +0000 UTC m=+0.017635536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:19 np0005591760 podman[86993]: 2026-01-22 09:34:19.666267658 +0000 UTC m=+0.028522078 container create c30bde643b8083c6928865ffe10c2ba94759c58e3249560fd47baa2c6f1a63b7 (image=quay.io/ceph/ceph:v19, name=naughty_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:19 np0005591760 systemd[1]: Started libpod-conmon-c30bde643b8083c6928865ffe10c2ba94759c58e3249560fd47baa2c6f1a63b7.scope.
Jan 22 04:34:19 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52aa723c1507f29ccf5aead144dff8bd6165a4fdfbf5132c78302c0b5a659106/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 22 04:34:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52aa723c1507f29ccf5aead144dff8bd6165a4fdfbf5132c78302c0b5a659106/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52aa723c1507f29ccf5aead144dff8bd6165a4fdfbf5132c78302c0b5a659106/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:19 np0005591760 podman[86993]: 2026-01-22 09:34:19.726633295 +0000 UTC m=+0.088887715 container init c30bde643b8083c6928865ffe10c2ba94759c58e3249560fd47baa2c6f1a63b7 (image=quay.io/ceph/ceph:v19, name=naughty_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 04:34:19 np0005591760 podman[86993]: 2026-01-22 09:34:19.733398007 +0000 UTC m=+0.095652428 container start c30bde643b8083c6928865ffe10c2ba94759c58e3249560fd47baa2c6f1a63b7 (image=quay.io/ceph/ceph:v19, name=naughty_gauss, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:19 np0005591760 podman[86993]: 2026-01-22 09:34:19.734470697 +0000 UTC m=+0.096725108 container attach c30bde643b8083c6928865ffe10c2ba94759c58e3249560fd47baa2c6f1a63b7 (image=quay.io/ceph/ceph:v19, name=naughty_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:19 np0005591760 podman[86993]: 2026-01-22 09:34:19.655264196 +0000 UTC m=+0.017518636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:19 np0005591760 cool_curran[86990]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:34:19 np0005591760 cool_curran[86990]: --> All data devices are unavailable
Jan 22 04:34:19 np0005591760 systemd[1]: libpod-aa78c2d108cfd05eb3febc77fa9ececedf82ffafd5b68c70ccabb72adf5c9ffb.scope: Deactivated successfully.
Jan 22 04:34:19 np0005591760 conmon[86990]: conmon aa78c2d108cfd05eb3fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa78c2d108cfd05eb3febc77fa9ececedf82ffafd5b68c70ccabb72adf5c9ffb.scope/container/memory.events
Jan 22 04:34:19 np0005591760 podman[86977]: 2026-01-22 09:34:19.92168422 +0000 UTC m=+0.378242468 container died aa78c2d108cfd05eb3febc77fa9ececedf82ffafd5b68c70ccabb72adf5c9ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_curran, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:19 np0005591760 systemd[1]: var-lib-containers-storage-overlay-fb1dd631e4b1e3da387bf315aa8fb95ee967d5e6bd435537b7a312faead78774-merged.mount: Deactivated successfully.
Jan 22 04:34:19 np0005591760 podman[86977]: 2026-01-22 09:34:19.943791482 +0000 UTC m=+0.400349730 container remove aa78c2d108cfd05eb3febc77fa9ececedf82ffafd5b68c70ccabb72adf5c9ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 04:34:19 np0005591760 systemd[1]: libpod-conmon-aa78c2d108cfd05eb3febc77fa9ececedf82ffafd5b68c70ccabb72adf5c9ffb.scope: Deactivated successfully.
Jan 22 04:34:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 22 04:34:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4274044498' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 04:34:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4274044498' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 22 04:34:20 np0005591760 naughty_gauss[87007]: 
Jan 22 04:34:20 np0005591760 naughty_gauss[87007]: [global]
Jan 22 04:34:20 np0005591760 naughty_gauss[87007]: #011fsid = 43df7a30-cf5f-5209-adfd-bf44298b19f2
Jan 22 04:34:20 np0005591760 naughty_gauss[87007]: #011mon_host = 192.168.122.100
Jan 22 04:34:20 np0005591760 systemd[1]: libpod-c30bde643b8083c6928865ffe10c2ba94759c58e3249560fd47baa2c6f1a63b7.scope: Deactivated successfully.
Jan 22 04:34:20 np0005591760 podman[86993]: 2026-01-22 09:34:20.031324456 +0000 UTC m=+0.393578876 container died c30bde643b8083c6928865ffe10c2ba94759c58e3249560fd47baa2c6f1a63b7 (image=quay.io/ceph/ceph:v19, name=naughty_gauss, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:20 np0005591760 podman[86993]: 2026-01-22 09:34:20.049459771 +0000 UTC m=+0.411714192 container remove c30bde643b8083c6928865ffe10c2ba94759c58e3249560fd47baa2c6f1a63b7 (image=quay.io/ceph/ceph:v19, name=naughty_gauss, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 22 04:34:20 np0005591760 systemd[1]: libpod-conmon-c30bde643b8083c6928865ffe10c2ba94759c58e3249560fd47baa2c6f1a63b7.scope: Deactivated successfully.
Jan 22 04:34:20 np0005591760 python3[87137]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:20 np0005591760 podman[87160]: 2026-01-22 09:34:20.328476647 +0000 UTC m=+0.035118354 container create 822493f8b82bac962c1fad2e5ea273c932ac0ca794859a1e753453d144ae1f25 (image=quay.io/ceph/ceph:v19, name=elated_golick, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 04:34:20 np0005591760 systemd[1]: Started libpod-conmon-822493f8b82bac962c1fad2e5ea273c932ac0ca794859a1e753453d144ae1f25.scope.
Jan 22 04:34:20 np0005591760 podman[87177]: 2026-01-22 09:34:20.364953497 +0000 UTC m=+0.036761858 container create b44c30a3957695f48f9b2e2ca0c116a790776002464592a12b3572ba1cb9627c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:20 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:20 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b879de47f17eef04b2ba82244354645f9b7d5b5d6f88a7740fa1960234adb9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:20 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b879de47f17eef04b2ba82244354645f9b7d5b5d6f88a7740fa1960234adb9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:20 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b879de47f17eef04b2ba82244354645f9b7d5b5d6f88a7740fa1960234adb9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:20 np0005591760 podman[87160]: 2026-01-22 09:34:20.379304991 +0000 UTC m=+0.085946718 container init 822493f8b82bac962c1fad2e5ea273c932ac0ca794859a1e753453d144ae1f25 (image=quay.io/ceph/ceph:v19, name=elated_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 22 04:34:20 np0005591760 systemd[1]: Started libpod-conmon-b44c30a3957695f48f9b2e2ca0c116a790776002464592a12b3572ba1cb9627c.scope.
Jan 22 04:34:20 np0005591760 podman[87160]: 2026-01-22 09:34:20.385191521 +0000 UTC m=+0.091833229 container start 822493f8b82bac962c1fad2e5ea273c932ac0ca794859a1e753453d144ae1f25 (image=quay.io/ceph/ceph:v19, name=elated_golick, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 04:34:20 np0005591760 podman[87160]: 2026-01-22 09:34:20.387346397 +0000 UTC m=+0.093988104 container attach 822493f8b82bac962c1fad2e5ea273c932ac0ca794859a1e753453d144ae1f25 (image=quay.io/ceph/ceph:v19, name=elated_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:34:20 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:20 np0005591760 systemd[1]: var-lib-containers-storage-overlay-52aa723c1507f29ccf5aead144dff8bd6165a4fdfbf5132c78302c0b5a659106-merged.mount: Deactivated successfully.
Jan 22 04:34:20 np0005591760 podman[87160]: 2026-01-22 09:34:20.312501296 +0000 UTC m=+0.019143024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:20 np0005591760 podman[87177]: 2026-01-22 09:34:20.410671913 +0000 UTC m=+0.082480284 container init b44c30a3957695f48f9b2e2ca0c116a790776002464592a12b3572ba1cb9627c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaum, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:20 np0005591760 podman[87177]: 2026-01-22 09:34:20.415209244 +0000 UTC m=+0.087017605 container start b44c30a3957695f48f9b2e2ca0c116a790776002464592a12b3572ba1cb9627c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:20 np0005591760 podman[87177]: 2026-01-22 09:34:20.416344912 +0000 UTC m=+0.088153273 container attach b44c30a3957695f48f9b2e2ca0c116a790776002464592a12b3572ba1cb9627c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:20 np0005591760 kind_chaum[87199]: 167 167
Jan 22 04:34:20 np0005591760 systemd[1]: libpod-b44c30a3957695f48f9b2e2ca0c116a790776002464592a12b3572ba1cb9627c.scope: Deactivated successfully.
Jan 22 04:34:20 np0005591760 podman[87177]: 2026-01-22 09:34:20.417869581 +0000 UTC m=+0.089677942 container died b44c30a3957695f48f9b2e2ca0c116a790776002464592a12b3572ba1cb9627c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaum, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 04:34:20 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6b7dd19f409863136fc0e920b438bc3671c32bbe56fb60ca01764b36e9e6e3cd-merged.mount: Deactivated successfully.
Jan 22 04:34:20 np0005591760 podman[87177]: 2026-01-22 09:34:20.435686829 +0000 UTC m=+0.107495179 container remove b44c30a3957695f48f9b2e2ca0c116a790776002464592a12b3572ba1cb9627c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 04:34:20 np0005591760 podman[87177]: 2026-01-22 09:34:20.347745046 +0000 UTC m=+0.019553407 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:20 np0005591760 systemd[1]: libpod-conmon-b44c30a3957695f48f9b2e2ca0c116a790776002464592a12b3572ba1cb9627c.scope: Deactivated successfully.
Jan 22 04:34:20 np0005591760 ceph-mgr[74522]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2745861301; not ready for session (expect reconnect)
Jan 22 04:34:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:20 np0005591760 ceph-mgr[74522]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 22 04:34:20 np0005591760 ceph-mon[74254]: Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:20 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/4274044498' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 04:34:20 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/4274044498' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 22 04:34:20 np0005591760 podman[87240]: 2026-01-22 09:34:20.549421872 +0000 UTC m=+0.026182876 container create 2e86f6c5db08b05bf8024b77916d17067bce17540957e3e855f6ce278eadc1d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:20 np0005591760 systemd[1]: Started libpod-conmon-2e86f6c5db08b05bf8024b77916d17067bce17540957e3e855f6ce278eadc1d0.scope.
Jan 22 04:34:20 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:20 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb8f4b2008ebca1a4f93273549a58fe40621188cdbe5cd4527f95bf036a8fbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:20 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb8f4b2008ebca1a4f93273549a58fe40621188cdbe5cd4527f95bf036a8fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:20 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb8f4b2008ebca1a4f93273549a58fe40621188cdbe5cd4527f95bf036a8fbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:20 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb8f4b2008ebca1a4f93273549a58fe40621188cdbe5cd4527f95bf036a8fbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:20 np0005591760 podman[87240]: 2026-01-22 09:34:20.604354813 +0000 UTC m=+0.081115836 container init 2e86f6c5db08b05bf8024b77916d17067bce17540957e3e855f6ce278eadc1d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_kilby, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:20 np0005591760 podman[87240]: 2026-01-22 09:34:20.608561762 +0000 UTC m=+0.085322765 container start 2e86f6c5db08b05bf8024b77916d17067bce17540957e3e855f6ce278eadc1d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:20 np0005591760 podman[87240]: 2026-01-22 09:34:20.609665991 +0000 UTC m=+0.086427014 container attach 2e86f6c5db08b05bf8024b77916d17067bce17540957e3e855f6ce278eadc1d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_kilby, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:20 np0005591760 podman[87240]: 2026-01-22 09:34:20.538872696 +0000 UTC m=+0.015633719 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 22 04:34:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1861953495' entity='client.admin' 
Jan 22 04:34:20 np0005591760 elated_golick[87190]: set ssl_option
Jan 22 04:34:20 np0005591760 systemd[1]: libpod-822493f8b82bac962c1fad2e5ea273c932ac0ca794859a1e753453d144ae1f25.scope: Deactivated successfully.
Jan 22 04:34:20 np0005591760 conmon[87190]: conmon 822493f8b82bac962c1f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-822493f8b82bac962c1fad2e5ea273c932ac0ca794859a1e753453d144ae1f25.scope/container/memory.events
Jan 22 04:34:20 np0005591760 podman[87262]: 2026-01-22 09:34:20.803143043 +0000 UTC m=+0.018632062 container died 822493f8b82bac962c1fad2e5ea273c932ac0ca794859a1e753453d144ae1f25 (image=quay.io/ceph/ceph:v19, name=elated_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]: {
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:    "0": [
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:        {
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            "devices": [
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "/dev/loop3"
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            ],
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            "lv_name": "ceph_lv0",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            "lv_size": "21470642176",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            "name": "ceph_lv0",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            "tags": {
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.cluster_name": "ceph",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.crush_device_class": "",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.encrypted": "0",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.osd_id": "0",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.type": "block",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.vdo": "0",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:                "ceph.with_tpm": "0"
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            },
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            "type": "block",
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:            "vg_name": "ceph_vg0"
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:        }
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]:    ]
Jan 22 04:34:20 np0005591760 adoring_kilby[87253]: }
Jan 22 04:34:20 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c0b879de47f17eef04b2ba82244354645f9b7d5b5d6f88a7740fa1960234adb9-merged.mount: Deactivated successfully.
Jan 22 04:34:20 np0005591760 podman[87262]: 2026-01-22 09:34:20.820408462 +0000 UTC m=+0.035897461 container remove 822493f8b82bac962c1fad2e5ea273c932ac0ca794859a1e753453d144ae1f25 (image=quay.io/ceph/ceph:v19, name=elated_golick, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:20 np0005591760 systemd[1]: libpod-conmon-822493f8b82bac962c1fad2e5ea273c932ac0ca794859a1e753453d144ae1f25.scope: Deactivated successfully.
Jan 22 04:34:20 np0005591760 systemd[1]: libpod-2e86f6c5db08b05bf8024b77916d17067bce17540957e3e855f6ce278eadc1d0.scope: Deactivated successfully.
Jan 22 04:34:20 np0005591760 podman[87240]: 2026-01-22 09:34:20.837369447 +0000 UTC m=+0.314130450 container died 2e86f6c5db08b05bf8024b77916d17067bce17540957e3e855f6ce278eadc1d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:20 np0005591760 systemd[1]: var-lib-containers-storage-overlay-8cb8f4b2008ebca1a4f93273549a58fe40621188cdbe5cd4527f95bf036a8fbb-merged.mount: Deactivated successfully.
Jan 22 04:34:20 np0005591760 podman[87240]: 2026-01-22 09:34:20.865211615 +0000 UTC m=+0.341972618 container remove 2e86f6c5db08b05bf8024b77916d17067bce17540957e3e855f6ce278eadc1d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:20 np0005591760 systemd[1]: libpod-conmon-2e86f6c5db08b05bf8024b77916d17067bce17540957e3e855f6ce278eadc1d0.scope: Deactivated successfully.
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/2745861301,v1:192.168.122.102:6801/2745861301] boot
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:21 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 26 pg[3.0( empty local-lis/les=13/15 n=0 ec=13/13 lis/c=13/13 les/c/f=15/15/0 sis=26 pruub=10.882409096s) [2] r=-1 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.135890961s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:34:21 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 26 pg[3.0( empty local-lis/les=13/15 n=0 ec=13/13 lis/c=13/13 les/c/f=15/15/0 sis=26 pruub=10.882379532s) [2] r=-1 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.135890961s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:34:21 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 26 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=26 pruub=12.902312279s) [2] r=-1 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.155975342s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:34:21 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 26 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=26 pruub=12.902301788s) [2] r=-1 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.155975342s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:34:21 np0005591760 python3[87335]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:21 np0005591760 podman[87361]: 2026-01-22 09:34:21.105766863 +0000 UTC m=+0.028261056 container create ee45966efb4ce55d46fa10e1554f61e7fbf966be5c41ac8cf41f680897d6a167 (image=quay.io/ceph/ceph:v19, name=dreamy_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:21 np0005591760 systemd[1]: Started libpod-conmon-ee45966efb4ce55d46fa10e1554f61e7fbf966be5c41ac8cf41f680897d6a167.scope.
Jan 22 04:34:21 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3095a42101ffc9bb91d909eeea7b07e6c8fd0884677a9b88ac935e7f53d61ba9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3095a42101ffc9bb91d909eeea7b07e6c8fd0884677a9b88ac935e7f53d61ba9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3095a42101ffc9bb91d909eeea7b07e6c8fd0884677a9b88ac935e7f53d61ba9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:21 np0005591760 podman[87361]: 2026-01-22 09:34:21.162347906 +0000 UTC m=+0.084842099 container init ee45966efb4ce55d46fa10e1554f61e7fbf966be5c41ac8cf41f680897d6a167 (image=quay.io/ceph/ceph:v19, name=dreamy_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:21 np0005591760 podman[87361]: 2026-01-22 09:34:21.168115983 +0000 UTC m=+0.090610176 container start ee45966efb4ce55d46fa10e1554f61e7fbf966be5c41ac8cf41f680897d6a167 (image=quay.io/ceph/ceph:v19, name=dreamy_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 22 04:34:21 np0005591760 podman[87361]: 2026-01-22 09:34:21.169439134 +0000 UTC m=+0.091933327 container attach ee45966efb4ce55d46fa10e1554f61e7fbf966be5c41ac8cf41f680897d6a167 (image=quay.io/ceph/ceph:v19, name=dreamy_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:21 np0005591760 podman[87361]: 2026-01-22 09:34:21.093957735 +0000 UTC m=+0.016451948 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:21 np0005591760 podman[87409]: 2026-01-22 09:34:21.275367003 +0000 UTC m=+0.033265528 container create 6a20d258649f23017d67b322769f5e9ed6c6310e58b09929c6db5d5b0506cb00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 04:34:21 np0005591760 systemd[1]: Started libpod-conmon-6a20d258649f23017d67b322769f5e9ed6c6310e58b09929c6db5d5b0506cb00.scope.
Jan 22 04:34:21 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:21 np0005591760 podman[87409]: 2026-01-22 09:34:21.322947183 +0000 UTC m=+0.080845728 container init 6a20d258649f23017d67b322769f5e9ed6c6310e58b09929c6db5d5b0506cb00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:21 np0005591760 podman[87409]: 2026-01-22 09:34:21.327824593 +0000 UTC m=+0.085723119 container start 6a20d258649f23017d67b322769f5e9ed6c6310e58b09929c6db5d5b0506cb00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 04:34:21 np0005591760 podman[87409]: 2026-01-22 09:34:21.328848491 +0000 UTC m=+0.086747016 container attach 6a20d258649f23017d67b322769f5e9ed6c6310e58b09929c6db5d5b0506cb00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_germain, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 22 04:34:21 np0005591760 thirsty_germain[87441]: 167 167
Jan 22 04:34:21 np0005591760 systemd[1]: libpod-6a20d258649f23017d67b322769f5e9ed6c6310e58b09929c6db5d5b0506cb00.scope: Deactivated successfully.
Jan 22 04:34:21 np0005591760 podman[87409]: 2026-01-22 09:34:21.330731125 +0000 UTC m=+0.088629680 container died 6a20d258649f23017d67b322769f5e9ed6c6310e58b09929c6db5d5b0506cb00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_germain, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 22 04:34:21 np0005591760 podman[87409]: 2026-01-22 09:34:21.347744268 +0000 UTC m=+0.105642793 container remove 6a20d258649f23017d67b322769f5e9ed6c6310e58b09929c6db5d5b0506cb00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:34:21 np0005591760 podman[87409]: 2026-01-22 09:34:21.261364525 +0000 UTC m=+0.019263070 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:21 np0005591760 systemd[1]: libpod-conmon-6a20d258649f23017d67b322769f5e9ed6c6310e58b09929c6db5d5b0506cb00.scope: Deactivated successfully.
Jan 22 04:34:21 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:34:21 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:21 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:21 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 22 04:34:21 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:21 np0005591760 dreamy_roentgen[87373]: Scheduled rgw.rgw update...
Jan 22 04:34:21 np0005591760 dreamy_roentgen[87373]: Scheduled ingress.rgw.default update...
Jan 22 04:34:21 np0005591760 systemd[1]: libpod-ee45966efb4ce55d46fa10e1554f61e7fbf966be5c41ac8cf41f680897d6a167.scope: Deactivated successfully.
Jan 22 04:34:21 np0005591760 podman[87463]: 2026-01-22 09:34:21.468032186 +0000 UTC m=+0.030410652 container create 9b3950289352e82772ba33590f1cb407fc5ed82b36181f1312c6b572dbfd7528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_torvalds, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 04:34:21 np0005591760 podman[87361]: 2026-01-22 09:34:21.468955865 +0000 UTC m=+0.391450058 container died ee45966efb4ce55d46fa10e1554f61e7fbf966be5c41ac8cf41f680897d6a167 (image=quay.io/ceph/ceph:v19, name=dreamy_roentgen, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 04:34:21 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3095a42101ffc9bb91d909eeea7b07e6c8fd0884677a9b88ac935e7f53d61ba9-merged.mount: Deactivated successfully.
Jan 22 04:34:21 np0005591760 podman[87361]: 2026-01-22 09:34:21.487339358 +0000 UTC m=+0.409833551 container remove ee45966efb4ce55d46fa10e1554f61e7fbf966be5c41ac8cf41f680897d6a167 (image=quay.io/ceph/ceph:v19, name=dreamy_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 04:34:21 np0005591760 systemd[1]: libpod-conmon-ee45966efb4ce55d46fa10e1554f61e7fbf966be5c41ac8cf41f680897d6a167.scope: Deactivated successfully.
Jan 22 04:34:21 np0005591760 systemd[1]: Started libpod-conmon-9b3950289352e82772ba33590f1cb407fc5ed82b36181f1312c6b572dbfd7528.scope.
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: OSD bench result of 22264.553075 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1861953495' entity='client.admin' 
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: osd.2 [v2:192.168.122.102:6800/2745861301,v1:192.168.122.102:6801/2745861301] boot
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:21 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:21 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7631277032174f828152b9d2190fb2e6b9e7fd311a440fa43c5a45cfe10beba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7631277032174f828152b9d2190fb2e6b9e7fd311a440fa43c5a45cfe10beba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7631277032174f828152b9d2190fb2e6b9e7fd311a440fa43c5a45cfe10beba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7631277032174f828152b9d2190fb2e6b9e7fd311a440fa43c5a45cfe10beba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:21 np0005591760 podman[87463]: 2026-01-22 09:34:21.456523003 +0000 UTC m=+0.018901479 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:21 np0005591760 podman[87463]: 2026-01-22 09:34:21.565537742 +0000 UTC m=+0.127916208 container init 9b3950289352e82772ba33590f1cb407fc5ed82b36181f1312c6b572dbfd7528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_torvalds, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:21 np0005591760 podman[87463]: 2026-01-22 09:34:21.569897818 +0000 UTC m=+0.132276274 container start 9b3950289352e82772ba33590f1cb407fc5ed82b36181f1312c6b572dbfd7528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_torvalds, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:21 np0005591760 podman[87463]: 2026-01-22 09:34:21.571412129 +0000 UTC m=+0.133790586 container attach 9b3950289352e82772ba33590f1cb407fc5ed82b36181f1312c6b572dbfd7528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:34:21 np0005591760 python3[87569]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 22 04:34:22 np0005591760 python3[87693]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769074461.6095355-37743-135354449604283/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:34:22 np0005591760 lvm[87711]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:34:22 np0005591760 lvm[87711]: VG ceph_vg0 finished
Jan 22 04:34:22 np0005591760 gifted_torvalds[87488]: {}
Jan 22 04:34:22 np0005591760 systemd[1]: libpod-9b3950289352e82772ba33590f1cb407fc5ed82b36181f1312c6b572dbfd7528.scope: Deactivated successfully.
Jan 22 04:34:22 np0005591760 systemd[1]: libpod-9b3950289352e82772ba33590f1cb407fc5ed82b36181f1312c6b572dbfd7528.scope: Consumed 1.027s CPU time.
Jan 22 04:34:22 np0005591760 podman[87463]: 2026-01-22 09:34:22.199442122 +0000 UTC m=+0.761820578 container died 9b3950289352e82772ba33590f1cb407fc5ed82b36181f1312c6b572dbfd7528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_torvalds, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 22 04:34:22 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b7631277032174f828152b9d2190fb2e6b9e7fd311a440fa43c5a45cfe10beba-merged.mount: Deactivated successfully.
Jan 22 04:34:22 np0005591760 podman[87463]: 2026-01-22 09:34:22.22523375 +0000 UTC m=+0.787612206 container remove 9b3950289352e82772ba33590f1cb407fc5ed82b36181f1312c6b572dbfd7528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 22 04:34:22 np0005591760 systemd[1]: libpod-conmon-9b3950289352e82772ba33590f1cb407fc5ed82b36181f1312c6b572dbfd7528.scope: Deactivated successfully.
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:22 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 22 04:34:22 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:22 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 04:34:22 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: Saving service ingress.rgw.default spec with placement count:2
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 04:34:22 np0005591760 python3[87846]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:22 np0005591760 podman[87849]: 2026-01-22 09:34:22.684056604 +0000 UTC m=+0.038010368 container create 117a1431f754484c39a7f6874789dea8efef93124ed0ed3d3f1d05bf53ddf88b (image=quay.io/ceph/ceph:v19, name=condescending_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:22 np0005591760 systemd[1]: Started libpod-conmon-117a1431f754484c39a7f6874789dea8efef93124ed0ed3d3f1d05bf53ddf88b.scope.
Jan 22 04:34:22 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16898d77879096c04f2b5dae0fa337a1da779ef09a990e5021a928ec3cb6fb8f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16898d77879096c04f2b5dae0fa337a1da779ef09a990e5021a928ec3cb6fb8f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16898d77879096c04f2b5dae0fa337a1da779ef09a990e5021a928ec3cb6fb8f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:22 np0005591760 podman[87849]: 2026-01-22 09:34:22.74673786 +0000 UTC m=+0.100691634 container init 117a1431f754484c39a7f6874789dea8efef93124ed0ed3d3f1d05bf53ddf88b (image=quay.io/ceph/ceph:v19, name=condescending_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:22 np0005591760 podman[87849]: 2026-01-22 09:34:22.753119962 +0000 UTC m=+0.107073726 container start 117a1431f754484c39a7f6874789dea8efef93124ed0ed3d3f1d05bf53ddf88b (image=quay.io/ceph/ceph:v19, name=condescending_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:22 np0005591760 podman[87849]: 2026-01-22 09:34:22.754666204 +0000 UTC m=+0.108619967 container attach 117a1431f754484c39a7f6874789dea8efef93124ed0ed3d3f1d05bf53ddf88b (image=quay.io/ceph/ceph:v19, name=condescending_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 04:34:22 np0005591760 podman[87849]: 2026-01-22 09:34:22.668916596 +0000 UTC m=+0.022870360 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:22 np0005591760 podman[87880]: 2026-01-22 09:34:22.798434357 +0000 UTC m=+0.027933741 container create 38f2251bfe36ee5673017d0f1dadb4b65f73215d412956656087a8a14bfa5b4b (image=quay.io/ceph/ceph:v19, name=trusting_bartik, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 04:34:22 np0005591760 systemd[1]: Started libpod-conmon-38f2251bfe36ee5673017d0f1dadb4b65f73215d412956656087a8a14bfa5b4b.scope.
Jan 22 04:34:22 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:22 np0005591760 podman[87880]: 2026-01-22 09:34:22.848286986 +0000 UTC m=+0.077786389 container init 38f2251bfe36ee5673017d0f1dadb4b65f73215d412956656087a8a14bfa5b4b (image=quay.io/ceph/ceph:v19, name=trusting_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:22 np0005591760 podman[87880]: 2026-01-22 09:34:22.853631205 +0000 UTC m=+0.083130588 container start 38f2251bfe36ee5673017d0f1dadb4b65f73215d412956656087a8a14bfa5b4b (image=quay.io/ceph/ceph:v19, name=trusting_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 22 04:34:22 np0005591760 trusting_bartik[87894]: 167 167
Jan 22 04:34:22 np0005591760 systemd[1]: libpod-38f2251bfe36ee5673017d0f1dadb4b65f73215d412956656087a8a14bfa5b4b.scope: Deactivated successfully.
Jan 22 04:34:22 np0005591760 podman[87880]: 2026-01-22 09:34:22.858625395 +0000 UTC m=+0.088124778 container attach 38f2251bfe36ee5673017d0f1dadb4b65f73215d412956656087a8a14bfa5b4b (image=quay.io/ceph/ceph:v19, name=trusting_bartik, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:22 np0005591760 conmon[87894]: conmon 38f2251bfe36ee567301 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38f2251bfe36ee5673017d0f1dadb4b65f73215d412956656087a8a14bfa5b4b.scope/container/memory.events
Jan 22 04:34:22 np0005591760 podman[87880]: 2026-01-22 09:34:22.859492307 +0000 UTC m=+0.088991700 container died 38f2251bfe36ee5673017d0f1dadb4b65f73215d412956656087a8a14bfa5b4b (image=quay.io/ceph/ceph:v19, name=trusting_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 22 04:34:22 np0005591760 systemd[1]: var-lib-containers-storage-overlay-fefa7b2d2a29d425592c73c0669cd313052e9a1dd3ad6e2db4fcd1574b0f825f-merged.mount: Deactivated successfully.
Jan 22 04:34:22 np0005591760 podman[87880]: 2026-01-22 09:34:22.787963328 +0000 UTC m=+0.017462731 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:22 np0005591760 podman[87880]: 2026-01-22 09:34:22.887400879 +0000 UTC m=+0.116900263 container remove 38f2251bfe36ee5673017d0f1dadb4b65f73215d412956656087a8a14bfa5b4b (image=quay.io/ceph/ceph:v19, name=trusting_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:22 np0005591760 systemd[1]: libpod-conmon-38f2251bfe36ee5673017d0f1dadb4b65f73215d412956656087a8a14bfa5b4b.scope: Deactivated successfully.
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:22 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.rfmoog (monmap changed)...
Jan 22 04:34:22 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.rfmoog (monmap changed)...
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rfmoog", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rfmoog", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:22 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.rfmoog on compute-0
Jan 22 04:34:22 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.rfmoog on compute-0
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14322 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service node-exporter spec with placement *
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 condescending_keller[87872]: Scheduled node-exporter update...
Jan 22 04:34:23 np0005591760 condescending_keller[87872]: Scheduled grafana update...
Jan 22 04:34:23 np0005591760 condescending_keller[87872]: Scheduled prometheus update...
Jan 22 04:34:23 np0005591760 condescending_keller[87872]: Scheduled alertmanager update...
Jan 22 04:34:23 np0005591760 systemd[1]: libpod-117a1431f754484c39a7f6874789dea8efef93124ed0ed3d3f1d05bf53ddf88b.scope: Deactivated successfully.
Jan 22 04:34:23 np0005591760 podman[87849]: 2026-01-22 09:34:23.125181956 +0000 UTC m=+0.479135720 container died 117a1431f754484c39a7f6874789dea8efef93124ed0ed3d3f1d05bf53ddf88b (image=quay.io/ceph/ceph:v19, name=condescending_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 04:34:23 np0005591760 systemd[1]: var-lib-containers-storage-overlay-16898d77879096c04f2b5dae0fa337a1da779ef09a990e5021a928ec3cb6fb8f-merged.mount: Deactivated successfully.
Jan 22 04:34:23 np0005591760 podman[87849]: 2026-01-22 09:34:23.147082289 +0000 UTC m=+0.501036052 container remove 117a1431f754484c39a7f6874789dea8efef93124ed0ed3d3f1d05bf53ddf88b (image=quay.io/ceph/ceph:v19, name=condescending_keller, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 04:34:23 np0005591760 systemd[1]: libpod-conmon-117a1431f754484c39a7f6874789dea8efef93124ed0ed3d3f1d05bf53ddf88b.scope: Deactivated successfully.
Jan 22 04:34:23 np0005591760 podman[88005]: 2026-01-22 09:34:23.284318901 +0000 UTC m=+0.030419721 container create 9083faf698c380d79f6fb4850a1290e10581ebe1a3fec845f179815abe4ccac7 (image=quay.io/ceph/ceph:v19, name=mystifying_feynman, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 22 04:34:23 np0005591760 systemd[1]: Started libpod-conmon-9083faf698c380d79f6fb4850a1290e10581ebe1a3fec845f179815abe4ccac7.scope.
Jan 22 04:34:23 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:23 np0005591760 podman[88005]: 2026-01-22 09:34:23.338437287 +0000 UTC m=+0.084538117 container init 9083faf698c380d79f6fb4850a1290e10581ebe1a3fec845f179815abe4ccac7 (image=quay.io/ceph/ceph:v19, name=mystifying_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:23 np0005591760 podman[88005]: 2026-01-22 09:34:23.343109181 +0000 UTC m=+0.089209991 container start 9083faf698c380d79f6fb4850a1290e10581ebe1a3fec845f179815abe4ccac7 (image=quay.io/ceph/ceph:v19, name=mystifying_feynman, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:23 np0005591760 podman[88005]: 2026-01-22 09:34:23.344084146 +0000 UTC m=+0.090184956 container attach 9083faf698c380d79f6fb4850a1290e10581ebe1a3fec845f179815abe4ccac7 (image=quay.io/ceph/ceph:v19, name=mystifying_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 04:34:23 np0005591760 mystifying_feynman[88018]: 167 167
Jan 22 04:34:23 np0005591760 systemd[1]: libpod-9083faf698c380d79f6fb4850a1290e10581ebe1a3fec845f179815abe4ccac7.scope: Deactivated successfully.
Jan 22 04:34:23 np0005591760 podman[88005]: 2026-01-22 09:34:23.347679624 +0000 UTC m=+0.093780434 container died 9083faf698c380d79f6fb4850a1290e10581ebe1a3fec845f179815abe4ccac7 (image=quay.io/ceph/ceph:v19, name=mystifying_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 04:34:23 np0005591760 podman[88005]: 2026-01-22 09:34:23.363568352 +0000 UTC m=+0.109669162 container remove 9083faf698c380d79f6fb4850a1290e10581ebe1a3fec845f179815abe4ccac7 (image=quay.io/ceph/ceph:v19, name=mystifying_feynman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 04:34:23 np0005591760 podman[88005]: 2026-01-22 09:34:23.272449919 +0000 UTC m=+0.018550730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:23 np0005591760 systemd[1]: libpod-conmon-9083faf698c380d79f6fb4850a1290e10581ebe1a3fec845f179815abe4ccac7.scope: Deactivated successfully.
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rfmoog", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 04:34:23 np0005591760 python3[88063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:23 np0005591760 podman[88109]: 2026-01-22 09:34:23.594589594 +0000 UTC m=+0.028065228 container create 37558e9b4d994d584ed7c6861e185c49408ddd078f8b7846aa23218556ecf7d5 (image=quay.io/ceph/ceph:v19, name=busy_ptolemy, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 22 04:34:23 np0005591760 systemd[1]: Started libpod-conmon-37558e9b4d994d584ed7c6861e185c49408ddd078f8b7846aa23218556ecf7d5.scope.
Jan 22 04:34:23 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300a6ccd32b356bb59ea6ca3fec51b30202e215872e17e11a18a0265700eac02/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300a6ccd32b356bb59ea6ca3fec51b30202e215872e17e11a18a0265700eac02/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300a6ccd32b356bb59ea6ca3fec51b30202e215872e17e11a18a0265700eac02/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:23 np0005591760 podman[88109]: 2026-01-22 09:34:23.651519073 +0000 UTC m=+0.084994728 container init 37558e9b4d994d584ed7c6861e185c49408ddd078f8b7846aa23218556ecf7d5 (image=quay.io/ceph/ceph:v19, name=busy_ptolemy, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 04:34:23 np0005591760 podman[88109]: 2026-01-22 09:34:23.656761521 +0000 UTC m=+0.090237156 container start 37558e9b4d994d584ed7c6861e185c49408ddd078f8b7846aa23218556ecf7d5 (image=quay.io/ceph/ceph:v19, name=busy_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:34:23 np0005591760 podman[88109]: 2026-01-22 09:34:23.657824611 +0000 UTC m=+0.091300246 container attach 37558e9b4d994d584ed7c6861e185c49408ddd078f8b7846aa23218556ecf7d5 (image=quay.io/ceph/ceph:v19, name=busy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:23 np0005591760 podman[88109]: 2026-01-22 09:34:23.583506132 +0000 UTC m=+0.016981787 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:23 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ed434c458730ce39a8b4d9343fbd44aa0c1448b9b8856bf0e174f0d2ccdedca3-merged.mount: Deactivated successfully.
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:34:23 np0005591760 podman[88139]: 2026-01-22 09:34:23.756376145 +0000 UTC m=+0.031525012 container create 385b7a052611cf88f1184aab3e0291bfcfedfdb87167312bb974d47dc564033b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 22 04:34:23 np0005591760 systemd[1]: Started libpod-conmon-385b7a052611cf88f1184aab3e0291bfcfedfdb87167312bb974d47dc564033b.scope.
Jan 22 04:34:23 np0005591760 podman[88139]: 2026-01-22 09:34:23.743484569 +0000 UTC m=+0.018633425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:23 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:23 np0005591760 podman[88139]: 2026-01-22 09:34:23.85018954 +0000 UTC m=+0.125338406 container init 385b7a052611cf88f1184aab3e0291bfcfedfdb87167312bb974d47dc564033b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_elbakyan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 22 04:34:23 np0005591760 podman[88139]: 2026-01-22 09:34:23.854450902 +0000 UTC m=+0.129599758 container start 385b7a052611cf88f1184aab3e0291bfcfedfdb87167312bb974d47dc564033b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_elbakyan, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:23 np0005591760 podman[88139]: 2026-01-22 09:34:23.855679785 +0000 UTC m=+0.130828651 container attach 385b7a052611cf88f1184aab3e0291bfcfedfdb87167312bb974d47dc564033b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 22 04:34:23 np0005591760 nifty_elbakyan[88171]: 167 167
Jan 22 04:34:23 np0005591760 systemd[1]: libpod-385b7a052611cf88f1184aab3e0291bfcfedfdb87167312bb974d47dc564033b.scope: Deactivated successfully.
Jan 22 04:34:23 np0005591760 conmon[88171]: conmon 385b7a052611cf88f118 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-385b7a052611cf88f1184aab3e0291bfcfedfdb87167312bb974d47dc564033b.scope/container/memory.events
Jan 22 04:34:23 np0005591760 podman[88139]: 2026-01-22 09:34:23.85851933 +0000 UTC m=+0.133668186 container died 385b7a052611cf88f1184aab3e0291bfcfedfdb87167312bb974d47dc564033b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:23 np0005591760 systemd[1]: var-lib-containers-storage-overlay-7672cbcbd32e085e18bdb9e0c56b527ca0ebffaa081e25faf314a6c0f7ae61c8-merged.mount: Deactivated successfully.
Jan 22 04:34:23 np0005591760 podman[88139]: 2026-01-22 09:34:23.877639249 +0000 UTC m=+0.152788105 container remove 385b7a052611cf88f1184aab3e0291bfcfedfdb87167312bb974d47dc564033b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 04:34:23 np0005591760 systemd[1]: libpod-conmon-385b7a052611cf88f1184aab3e0291bfcfedfdb87167312bb974d47dc564033b.scope: Deactivated successfully.
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Jan 22 04:34:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Jan 22 04:34:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1083578268' entity='client.admin' 
Jan 22 04:34:23 np0005591760 systemd[1]: libpod-37558e9b4d994d584ed7c6861e185c49408ddd078f8b7846aa23218556ecf7d5.scope: Deactivated successfully.
Jan 22 04:34:23 np0005591760 podman[88109]: 2026-01-22 09:34:23.955349243 +0000 UTC m=+0.388824878 container died 37558e9b4d994d584ed7c6861e185c49408ddd078f8b7846aa23218556ecf7d5 (image=quay.io/ceph/ceph:v19, name=busy_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 04:34:23 np0005591760 systemd[1]: var-lib-containers-storage-overlay-300a6ccd32b356bb59ea6ca3fec51b30202e215872e17e11a18a0265700eac02-merged.mount: Deactivated successfully.
Jan 22 04:34:23 np0005591760 podman[88109]: 2026-01-22 09:34:23.979590382 +0000 UTC m=+0.413066017 container remove 37558e9b4d994d584ed7c6861e185c49408ddd078f8b7846aa23218556ecf7d5 (image=quay.io/ceph/ceph:v19, name=busy_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:23 np0005591760 systemd[1]: libpod-conmon-37558e9b4d994d584ed7c6861e185c49408ddd078f8b7846aa23218556ecf7d5.scope: Deactivated successfully.
Jan 22 04:34:24 np0005591760 python3[88270]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:24 np0005591760 podman[88285]: 2026-01-22 09:34:24.262174771 +0000 UTC m=+0.029884573 container create 9e5f3c4c49e55243798291aea98b16fe2776cce7464170f90fb2e94c68e92d4f (image=quay.io/ceph/ceph:v19, name=pensive_swanson, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:24 np0005591760 podman[88286]: 2026-01-22 09:34:24.277486924 +0000 UTC m=+0.039061697 container create eb7bb3bc4834e9cc7321aa3ad6ff77ec8e870b0c4d0660888857f025e1241908 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_benz, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:34:24 np0005591760 systemd[1]: Started libpod-conmon-9e5f3c4c49e55243798291aea98b16fe2776cce7464170f90fb2e94c68e92d4f.scope.
Jan 22 04:34:24 np0005591760 systemd[1]: Started libpod-conmon-eb7bb3bc4834e9cc7321aa3ad6ff77ec8e870b0c4d0660888857f025e1241908.scope.
Jan 22 04:34:24 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b3d2339e475c9776e22ef2c3da3b81fa918df879bf17cce6ec17e18f9cacf7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b3d2339e475c9776e22ef2c3da3b81fa918df879bf17cce6ec17e18f9cacf7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b3d2339e475c9776e22ef2c3da3b81fa918df879bf17cce6ec17e18f9cacf7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:24 np0005591760 podman[88285]: 2026-01-22 09:34:24.313553482 +0000 UTC m=+0.081263283 container init 9e5f3c4c49e55243798291aea98b16fe2776cce7464170f90fb2e94c68e92d4f (image=quay.io/ceph/ceph:v19, name=pensive_swanson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:34:24 np0005591760 podman[88285]: 2026-01-22 09:34:24.31814289 +0000 UTC m=+0.085852682 container start 9e5f3c4c49e55243798291aea98b16fe2776cce7464170f90fb2e94c68e92d4f (image=quay.io/ceph/ceph:v19, name=pensive_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:24 np0005591760 podman[88285]: 2026-01-22 09:34:24.319485449 +0000 UTC m=+0.087195240 container attach 9e5f3c4c49e55243798291aea98b16fe2776cce7464170f90fb2e94c68e92d4f (image=quay.io/ceph/ceph:v19, name=pensive_swanson, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:24 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:24 np0005591760 podman[88286]: 2026-01-22 09:34:24.326268045 +0000 UTC m=+0.087842809 container init eb7bb3bc4834e9cc7321aa3ad6ff77ec8e870b0c4d0660888857f025e1241908 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_benz, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 04:34:24 np0005591760 podman[88286]: 2026-01-22 09:34:24.330067276 +0000 UTC m=+0.091642039 container start eb7bb3bc4834e9cc7321aa3ad6ff77ec8e870b0c4d0660888857f025e1241908 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 22 04:34:24 np0005591760 podman[88286]: 2026-01-22 09:34:24.331068881 +0000 UTC m=+0.092643645 container attach eb7bb3bc4834e9cc7321aa3ad6ff77ec8e870b0c4d0660888857f025e1241908 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_benz, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:24 np0005591760 adoring_benz[88313]: 167 167
Jan 22 04:34:24 np0005591760 systemd[1]: libpod-eb7bb3bc4834e9cc7321aa3ad6ff77ec8e870b0c4d0660888857f025e1241908.scope: Deactivated successfully.
Jan 22 04:34:24 np0005591760 podman[88286]: 2026-01-22 09:34:24.333877598 +0000 UTC m=+0.095452371 container died eb7bb3bc4834e9cc7321aa3ad6ff77ec8e870b0c4d0660888857f025e1241908 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 22 04:34:24 np0005591760 podman[88285]: 2026-01-22 09:34:24.250655048 +0000 UTC m=+0.018364859 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:24 np0005591760 podman[88286]: 2026-01-22 09:34:24.349091986 +0000 UTC m=+0.110666750 container remove eb7bb3bc4834e9cc7321aa3ad6ff77ec8e870b0c4d0660888857f025e1241908 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:24 np0005591760 podman[88286]: 2026-01-22 09:34:24.257944079 +0000 UTC m=+0.019518862 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:24 np0005591760 systemd[1]: libpod-conmon-eb7bb3bc4834e9cc7321aa3ad6ff77ec8e870b0c4d0660888857f025e1241908.scope: Deactivated successfully.
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:24 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 22 04:34:24 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:24 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 22 04:34:24 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: Reconfiguring mgr.compute-0.rfmoog (monmap changed)...
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: Reconfiguring daemon mgr.compute-0.rfmoog on compute-0
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: Saving service node-exporter spec with placement *
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: Saving service grafana spec with placement compute-0;count:1
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: Saving service prometheus spec with placement compute-0;count:1
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: Saving service alertmanager spec with placement compute-0;count:1
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: Reconfiguring osd.0 (monmap changed)...
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: Reconfiguring daemon osd.0 on compute-0
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1083578268' entity='client.admin' 
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1823157614' entity='client.admin' 
Jan 22 04:34:24 np0005591760 systemd[1]: libpod-9e5f3c4c49e55243798291aea98b16fe2776cce7464170f90fb2e94c68e92d4f.scope: Deactivated successfully.
Jan 22 04:34:24 np0005591760 podman[88285]: 2026-01-22 09:34:24.61597938 +0000 UTC m=+0.383689171 container died 9e5f3c4c49e55243798291aea98b16fe2776cce7464170f90fb2e94c68e92d4f (image=quay.io/ceph/ceph:v19, name=pensive_swanson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:24 np0005591760 podman[88285]: 2026-01-22 09:34:24.634762294 +0000 UTC m=+0.402472086 container remove 9e5f3c4c49e55243798291aea98b16fe2776cce7464170f90fb2e94c68e92d4f (image=quay.io/ceph/ceph:v19, name=pensive_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 22 04:34:24 np0005591760 systemd[1]: libpod-conmon-9e5f3c4c49e55243798291aea98b16fe2776cce7464170f90fb2e94c68e92d4f.scope: Deactivated successfully.
Jan 22 04:34:24 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e4b3d2339e475c9776e22ef2c3da3b81fa918df879bf17cce6ec17e18f9cacf7-merged.mount: Deactivated successfully.
Jan 22 04:34:24 np0005591760 python3[88391]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:24 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 22 04:34:24 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:24 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Jan 22 04:34:24 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Jan 22 04:34:24 np0005591760 podman[88392]: 2026-01-22 09:34:24.913105612 +0000 UTC m=+0.028347771 container create 4e079bafd809cf797cfe6335a86c081a0692e2f4ca61e20159d85f0b2a9705ab (image=quay.io/ceph/ceph:v19, name=ecstatic_mirzakhani, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:34:24 np0005591760 systemd[1]: Started libpod-conmon-4e079bafd809cf797cfe6335a86c081a0692e2f4ca61e20159d85f0b2a9705ab.scope.
Jan 22 04:34:24 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56866eaa182932ed1a3654d9306c41310da709d3e6a7fac9672fd33f39d68b2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56866eaa182932ed1a3654d9306c41310da709d3e6a7fac9672fd33f39d68b2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56866eaa182932ed1a3654d9306c41310da709d3e6a7fac9672fd33f39d68b2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:24 np0005591760 podman[88392]: 2026-01-22 09:34:24.96368147 +0000 UTC m=+0.078923639 container init 4e079bafd809cf797cfe6335a86c081a0692e2f4ca61e20159d85f0b2a9705ab (image=quay.io/ceph/ceph:v19, name=ecstatic_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:34:24 np0005591760 podman[88392]: 2026-01-22 09:34:24.967577896 +0000 UTC m=+0.082820054 container start 4e079bafd809cf797cfe6335a86c081a0692e2f4ca61e20159d85f0b2a9705ab (image=quay.io/ceph/ceph:v19, name=ecstatic_mirzakhani, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:24 np0005591760 podman[88392]: 2026-01-22 09:34:24.968702161 +0000 UTC m=+0.083944320 container attach 4e079bafd809cf797cfe6335a86c081a0692e2f4ca61e20159d85f0b2a9705ab (image=quay.io/ceph/ceph:v19, name=ecstatic_mirzakhani, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:24 np0005591760 podman[88392]: 2026-01-22 09:34:24.90182064 +0000 UTC m=+0.017062819 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4171560874' entity='client.admin' 
Jan 22 04:34:25 np0005591760 systemd[1]: libpod-4e079bafd809cf797cfe6335a86c081a0692e2f4ca61e20159d85f0b2a9705ab.scope: Deactivated successfully.
Jan 22 04:34:25 np0005591760 podman[88392]: 2026-01-22 09:34:25.245194433 +0000 UTC m=+0.360436602 container died 4e079bafd809cf797cfe6335a86c081a0692e2f4ca61e20159d85f0b2a9705ab (image=quay.io/ceph/ceph:v19, name=ecstatic_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 04:34:25 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a56866eaa182932ed1a3654d9306c41310da709d3e6a7fac9672fd33f39d68b2-merged.mount: Deactivated successfully.
Jan 22 04:34:25 np0005591760 podman[88392]: 2026-01-22 09:34:25.266332061 +0000 UTC m=+0.381574220 container remove 4e079bafd809cf797cfe6335a86c081a0692e2f4ca61e20159d85f0b2a9705ab (image=quay.io/ceph/ceph:v19, name=ecstatic_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:25 np0005591760 systemd[1]: libpod-conmon-4e079bafd809cf797cfe6335a86c081a0692e2f4ca61e20159d85f0b2a9705ab.scope: Deactivated successfully.
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:25 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 22 04:34:25 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:25 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 22 04:34:25 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1823157614' entity='client.admin' 
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: Reconfiguring osd.1 (monmap changed)...
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: Reconfiguring daemon osd.1 on compute-1
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/4171560874' entity='client.admin' 
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 22 04:34:25 np0005591760 python3[88463]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:25 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 22 04:34:25 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:25 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 22 04:34:25 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 22 04:34:26 np0005591760 python3[88498]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.rfmoog/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:26 np0005591760 podman[88499]: 2026-01-22 09:34:26.130461295 +0000 UTC m=+0.026998630 container create 9854f4ddd7ee592bfc73acfb8fb672e0e524e78eec273bbf765f7fcd4f378378 (image=quay.io/ceph/ceph:v19, name=gracious_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 04:34:26 np0005591760 systemd[1]: Started libpod-conmon-9854f4ddd7ee592bfc73acfb8fb672e0e524e78eec273bbf765f7fcd4f378378.scope.
Jan 22 04:34:26 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f7ab74b308dd8d09a58dcad145a746bcf7a0c6cda4fc35dfc3cb6f74db4477/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f7ab74b308dd8d09a58dcad145a746bcf7a0c6cda4fc35dfc3cb6f74db4477/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f7ab74b308dd8d09a58dcad145a746bcf7a0c6cda4fc35dfc3cb6f74db4477/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:26 np0005591760 podman[88499]: 2026-01-22 09:34:26.182433723 +0000 UTC m=+0.078971089 container init 9854f4ddd7ee592bfc73acfb8fb672e0e524e78eec273bbf765f7fcd4f378378 (image=quay.io/ceph/ceph:v19, name=gracious_germain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:26 np0005591760 podman[88499]: 2026-01-22 09:34:26.186748765 +0000 UTC m=+0.083286100 container start 9854f4ddd7ee592bfc73acfb8fb672e0e524e78eec273bbf765f7fcd4f378378 (image=quay.io/ceph/ceph:v19, name=gracious_germain, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:26 np0005591760 podman[88499]: 2026-01-22 09:34:26.188009458 +0000 UTC m=+0.084546794 container attach 9854f4ddd7ee592bfc73acfb8fb672e0e524e78eec273bbf765f7fcd4f378378 (image=quay.io/ceph/ceph:v19, name=gracious_germain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:26 np0005591760 podman[88499]: 2026-01-22 09:34:26.119745895 +0000 UTC m=+0.016283251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:26 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.bisona (monmap changed)...
Jan 22 04:34:26 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.bisona (monmap changed)...
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.bisona", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.bisona", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:26 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.bisona on compute-2
Jan 22 04:34:26 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.bisona on compute-2
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.rfmoog/server_addr}] v 0)
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/308023789' entity='client.admin' 
Jan 22 04:34:26 np0005591760 systemd[1]: libpod-9854f4ddd7ee592bfc73acfb8fb672e0e524e78eec273bbf765f7fcd4f378378.scope: Deactivated successfully.
Jan 22 04:34:26 np0005591760 podman[88499]: 2026-01-22 09:34:26.469851831 +0000 UTC m=+0.366389177 container died 9854f4ddd7ee592bfc73acfb8fb672e0e524e78eec273bbf765f7fcd4f378378 (image=quay.io/ceph/ceph:v19, name=gracious_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:26 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e6f7ab74b308dd8d09a58dcad145a746bcf7a0c6cda4fc35dfc3cb6f74db4477-merged.mount: Deactivated successfully.
Jan 22 04:34:26 np0005591760 podman[88499]: 2026-01-22 09:34:26.487910834 +0000 UTC m=+0.384448179 container remove 9854f4ddd7ee592bfc73acfb8fb672e0e524e78eec273bbf765f7fcd4f378378 (image=quay.io/ceph/ceph:v19, name=gracious_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:26 np0005591760 systemd[1]: libpod-conmon-9854f4ddd7ee592bfc73acfb8fb672e0e524e78eec273bbf765f7fcd4f378378.scope: Deactivated successfully.
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: Reconfiguring mgr.compute-2.bisona (monmap changed)...
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.bisona", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: Reconfiguring daemon mgr.compute-2.bisona on compute-2
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/308023789' entity='client.admin' 
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:26 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:27 np0005591760 python3[88571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.upcmhd/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:27 np0005591760 podman[88572]: 2026-01-22 09:34:27.138834048 +0000 UTC m=+0.027463425 container create 73aa1f384360efb9204aff0ed13c0b85dad2d2e6df84731a4d58c0e51f48bc28 (image=quay.io/ceph/ceph:v19, name=silly_borg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:27 np0005591760 systemd[1]: Started libpod-conmon-73aa1f384360efb9204aff0ed13c0b85dad2d2e6df84731a4d58c0e51f48bc28.scope.
Jan 22 04:34:27 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db187f0a7214efc1424eb9d1ef373cc78094ba4447072a7a8c3c7e5d369852e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db187f0a7214efc1424eb9d1ef373cc78094ba4447072a7a8c3c7e5d369852e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db187f0a7214efc1424eb9d1ef373cc78094ba4447072a7a8c3c7e5d369852e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:27 np0005591760 podman[88572]: 2026-01-22 09:34:27.196712433 +0000 UTC m=+0.085341831 container init 73aa1f384360efb9204aff0ed13c0b85dad2d2e6df84731a4d58c0e51f48bc28 (image=quay.io/ceph/ceph:v19, name=silly_borg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 04:34:27 np0005591760 podman[88572]: 2026-01-22 09:34:27.200865031 +0000 UTC m=+0.089494408 container start 73aa1f384360efb9204aff0ed13c0b85dad2d2e6df84731a4d58c0e51f48bc28 (image=quay.io/ceph/ceph:v19, name=silly_borg, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 04:34:27 np0005591760 podman[88572]: 2026-01-22 09:34:27.20481195 +0000 UTC m=+0.093441347 container attach 73aa1f384360efb9204aff0ed13c0b85dad2d2e6df84731a4d58c0e51f48bc28 (image=quay.io/ceph/ceph:v19, name=silly_borg, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:34:27 np0005591760 podman[88572]: 2026-01-22 09:34:27.127551022 +0000 UTC m=+0.016180418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.upcmhd/server_addr}] v 0)
Jan 22 04:34:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/285753853' entity='client.admin' 
Jan 22 04:34:27 np0005591760 systemd[1]: libpod-73aa1f384360efb9204aff0ed13c0b85dad2d2e6df84731a4d58c0e51f48bc28.scope: Deactivated successfully.
Jan 22 04:34:27 np0005591760 podman[88572]: 2026-01-22 09:34:27.492531936 +0000 UTC m=+0.381161313 container died 73aa1f384360efb9204aff0ed13c0b85dad2d2e6df84731a4d58c0e51f48bc28 (image=quay.io/ceph/ceph:v19, name=silly_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 04:34:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-2db187f0a7214efc1424eb9d1ef373cc78094ba4447072a7a8c3c7e5d369852e-merged.mount: Deactivated successfully.
Jan 22 04:34:27 np0005591760 podman[88572]: 2026-01-22 09:34:27.510205463 +0000 UTC m=+0.398834840 container remove 73aa1f384360efb9204aff0ed13c0b85dad2d2e6df84731a4d58c0e51f48bc28 (image=quay.io/ceph/ceph:v19, name=silly_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 22 04:34:27 np0005591760 systemd[1]: libpod-conmon-73aa1f384360efb9204aff0ed13c0b85dad2d2e6df84731a4d58c0e51f48bc28.scope: Deactivated successfully.
Jan 22 04:34:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:34:27 np0005591760 podman[88700]: 2026-01-22 09:34:27.74767845 +0000 UTC m=+0.028911831 container create e1349181f0672f8f54f8f198047b1f2c02b12767f07ca444275d253f599597da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 04:34:27 np0005591760 systemd[1]: Started libpod-conmon-e1349181f0672f8f54f8f198047b1f2c02b12767f07ca444275d253f599597da.scope.
Jan 22 04:34:27 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:27 np0005591760 podman[88700]: 2026-01-22 09:34:27.801380293 +0000 UTC m=+0.082613685 container init e1349181f0672f8f54f8f198047b1f2c02b12767f07ca444275d253f599597da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhabha, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:27 np0005591760 podman[88700]: 2026-01-22 09:34:27.805931559 +0000 UTC m=+0.087164941 container start e1349181f0672f8f54f8f198047b1f2c02b12767f07ca444275d253f599597da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhabha, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:27 np0005591760 podman[88700]: 2026-01-22 09:34:27.807145283 +0000 UTC m=+0.088378666 container attach e1349181f0672f8f54f8f198047b1f2c02b12767f07ca444275d253f599597da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhabha, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:27 np0005591760 hopeful_bhabha[88714]: 167 167
Jan 22 04:34:27 np0005591760 systemd[1]: libpod-e1349181f0672f8f54f8f198047b1f2c02b12767f07ca444275d253f599597da.scope: Deactivated successfully.
Jan 22 04:34:27 np0005591760 podman[88700]: 2026-01-22 09:34:27.809490828 +0000 UTC m=+0.090724211 container died e1349181f0672f8f54f8f198047b1f2c02b12767f07ca444275d253f599597da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhabha, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-88abb6b64b76807482c3256547c28dcc6ddfe57f648b2013512543956d2324d8-merged.mount: Deactivated successfully.
Jan 22 04:34:27 np0005591760 podman[88700]: 2026-01-22 09:34:27.829324441 +0000 UTC m=+0.110557823 container remove e1349181f0672f8f54f8f198047b1f2c02b12767f07ca444275d253f599597da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhabha, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 22 04:34:27 np0005591760 podman[88700]: 2026-01-22 09:34:27.735667982 +0000 UTC m=+0.016901384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:27 np0005591760 systemd[1]: libpod-conmon-e1349181f0672f8f54f8f198047b1f2c02b12767f07ca444275d253f599597da.scope: Deactivated successfully.
Jan 22 04:34:27 np0005591760 podman[88735]: 2026-01-22 09:34:27.940096909 +0000 UTC m=+0.027820096 container create bcaf37b52e1e1190cff2cbacb03f3e9f588ee867f0f7f711782c5144b838e1d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 04:34:27 np0005591760 systemd[1]: Started libpod-conmon-bcaf37b52e1e1190cff2cbacb03f3e9f588ee867f0f7f711782c5144b838e1d6.scope.
Jan 22 04:34:27 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71058cdcb1ec86a6ee7d446253aa8fee6ccb47c62c4a776489295cc44c30e71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71058cdcb1ec86a6ee7d446253aa8fee6ccb47c62c4a776489295cc44c30e71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71058cdcb1ec86a6ee7d446253aa8fee6ccb47c62c4a776489295cc44c30e71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71058cdcb1ec86a6ee7d446253aa8fee6ccb47c62c4a776489295cc44c30e71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71058cdcb1ec86a6ee7d446253aa8fee6ccb47c62c4a776489295cc44c30e71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:27 np0005591760 podman[88735]: 2026-01-22 09:34:27.997981395 +0000 UTC m=+0.085704602 container init bcaf37b52e1e1190cff2cbacb03f3e9f588ee867f0f7f711782c5144b838e1d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:28 np0005591760 podman[88735]: 2026-01-22 09:34:28.004230708 +0000 UTC m=+0.091953896 container start bcaf37b52e1e1190cff2cbacb03f3e9f588ee867f0f7f711782c5144b838e1d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Jan 22 04:34:28 np0005591760 podman[88735]: 2026-01-22 09:34:28.005488255 +0000 UTC m=+0.093211443 container attach bcaf37b52e1e1190cff2cbacb03f3e9f588ee867f0f7f711782c5144b838e1d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_chatelet, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 04:34:28 np0005591760 podman[88735]: 2026-01-22 09:34:27.928810244 +0000 UTC m=+0.016533451 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:34:28 np0005591760 python3[88778]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.bisona/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:28 np0005591760 podman[88783]: 2026-01-22 09:34:28.211529919 +0000 UTC m=+0.026578160 container create 67445560b23bc2e53e5a80d193e296d6af03c153d705cfeafe49941ba37d870f (image=quay.io/ceph/ceph:v19, name=elegant_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:28 np0005591760 systemd[1]: Started libpod-conmon-67445560b23bc2e53e5a80d193e296d6af03c153d705cfeafe49941ba37d870f.scope.
Jan 22 04:34:28 np0005591760 confident_chatelet[88748]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:34:28 np0005591760 confident_chatelet[88748]: --> All data devices are unavailable
Jan 22 04:34:28 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ca687b104f83f705727f77a80d3289a62cb6d709e796dbbe257071ca1822a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ca687b104f83f705727f77a80d3289a62cb6d709e796dbbe257071ca1822a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ca687b104f83f705727f77a80d3289a62cb6d709e796dbbe257071ca1822a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:28 np0005591760 podman[88783]: 2026-01-22 09:34:28.264027906 +0000 UTC m=+0.079076137 container init 67445560b23bc2e53e5a80d193e296d6af03c153d705cfeafe49941ba37d870f (image=quay.io/ceph/ceph:v19, name=elegant_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 04:34:28 np0005591760 podman[88783]: 2026-01-22 09:34:28.268238451 +0000 UTC m=+0.083286682 container start 67445560b23bc2e53e5a80d193e296d6af03c153d705cfeafe49941ba37d870f (image=quay.io/ceph/ceph:v19, name=elegant_hamilton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:34:28 np0005591760 podman[88783]: 2026-01-22 09:34:28.270021347 +0000 UTC m=+0.085069598 container attach 67445560b23bc2e53e5a80d193e296d6af03c153d705cfeafe49941ba37d870f (image=quay.io/ceph/ceph:v19, name=elegant_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:28 np0005591760 systemd[1]: libpod-bcaf37b52e1e1190cff2cbacb03f3e9f588ee867f0f7f711782c5144b838e1d6.scope: Deactivated successfully.
Jan 22 04:34:28 np0005591760 podman[88735]: 2026-01-22 09:34:28.281225847 +0000 UTC m=+0.368949035 container died bcaf37b52e1e1190cff2cbacb03f3e9f588ee867f0f7f711782c5144b838e1d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 22 04:34:28 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b71058cdcb1ec86a6ee7d446253aa8fee6ccb47c62c4a776489295cc44c30e71-merged.mount: Deactivated successfully.
Jan 22 04:34:28 np0005591760 podman[88783]: 2026-01-22 09:34:28.201159278 +0000 UTC m=+0.016207519 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:28 np0005591760 podman[88735]: 2026-01-22 09:34:28.301044833 +0000 UTC m=+0.388768020 container remove bcaf37b52e1e1190cff2cbacb03f3e9f588ee867f0f7f711782c5144b838e1d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_chatelet, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 04:34:28 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:28 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:28 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:28 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:28 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:34:28 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:28 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:34:28 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/285753853' entity='client.admin' 
Jan 22 04:34:28 np0005591760 systemd[1]: libpod-conmon-bcaf37b52e1e1190cff2cbacb03f3e9f588ee867f0f7f711782c5144b838e1d6.scope: Deactivated successfully.
Jan 22 04:34:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.bisona/server_addr}] v 0)
Jan 22 04:34:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/245233264' entity='client.admin' 
Jan 22 04:34:28 np0005591760 systemd[1]: libpod-67445560b23bc2e53e5a80d193e296d6af03c153d705cfeafe49941ba37d870f.scope: Deactivated successfully.
Jan 22 04:34:28 np0005591760 podman[88783]: 2026-01-22 09:34:28.57043076 +0000 UTC m=+0.385478990 container died 67445560b23bc2e53e5a80d193e296d6af03c153d705cfeafe49941ba37d870f (image=quay.io/ceph/ceph:v19, name=elegant_hamilton, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:28 np0005591760 systemd[1]: var-lib-containers-storage-overlay-071ca687b104f83f705727f77a80d3289a62cb6d709e796dbbe257071ca1822a-merged.mount: Deactivated successfully.
Jan 22 04:34:28 np0005591760 podman[88783]: 2026-01-22 09:34:28.598772777 +0000 UTC m=+0.413821009 container remove 67445560b23bc2e53e5a80d193e296d6af03c153d705cfeafe49941ba37d870f (image=quay.io/ceph/ceph:v19, name=elegant_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:28 np0005591760 systemd[1]: libpod-conmon-67445560b23bc2e53e5a80d193e296d6af03c153d705cfeafe49941ba37d870f.scope: Deactivated successfully.
Jan 22 04:34:28 np0005591760 podman[88927]: 2026-01-22 09:34:28.704443132 +0000 UTC m=+0.026812260 container create 92b25cf0f8eab208e430a014d388474042b61168d3e8efc3bb3d248905fa91b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 04:34:28 np0005591760 systemd[1]: Started libpod-conmon-92b25cf0f8eab208e430a014d388474042b61168d3e8efc3bb3d248905fa91b3.scope.
Jan 22 04:34:28 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:28 np0005591760 podman[88927]: 2026-01-22 09:34:28.748540957 +0000 UTC m=+0.070910096 container init 92b25cf0f8eab208e430a014d388474042b61168d3e8efc3bb3d248905fa91b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_bassi, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Jan 22 04:34:28 np0005591760 podman[88927]: 2026-01-22 09:34:28.754602646 +0000 UTC m=+0.076971775 container start 92b25cf0f8eab208e430a014d388474042b61168d3e8efc3bb3d248905fa91b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_bassi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:34:28 np0005591760 podman[88927]: 2026-01-22 09:34:28.755894619 +0000 UTC m=+0.078263768 container attach 92b25cf0f8eab208e430a014d388474042b61168d3e8efc3bb3d248905fa91b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 04:34:28 np0005591760 ecstatic_bassi[88953]: 167 167
Jan 22 04:34:28 np0005591760 systemd[1]: libpod-92b25cf0f8eab208e430a014d388474042b61168d3e8efc3bb3d248905fa91b3.scope: Deactivated successfully.
Jan 22 04:34:28 np0005591760 podman[88927]: 2026-01-22 09:34:28.758019878 +0000 UTC m=+0.080389027 container died 92b25cf0f8eab208e430a014d388474042b61168d3e8efc3bb3d248905fa91b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 22 04:34:28 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4008f0ba7d33f9ac7f58da9ba9003a3768382924dc776cf303b068f9b331e931-merged.mount: Deactivated successfully.
Jan 22 04:34:28 np0005591760 podman[88927]: 2026-01-22 09:34:28.776602878 +0000 UTC m=+0.098972006 container remove 92b25cf0f8eab208e430a014d388474042b61168d3e8efc3bb3d248905fa91b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_bassi, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:28 np0005591760 podman[88927]: 2026-01-22 09:34:28.693143852 +0000 UTC m=+0.015513002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:28 np0005591760 systemd[1]: libpod-conmon-92b25cf0f8eab208e430a014d388474042b61168d3e8efc3bb3d248905fa91b3.scope: Deactivated successfully.
Jan 22 04:34:28 np0005591760 python3[88969]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:28 np0005591760 podman[88986]: 2026-01-22 09:34:28.890694903 +0000 UTC m=+0.027756185 container create 770fa0b654ff3901ba99dd35759c7547380fa4c6cd84d8494f476c3280bca16b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:34:28 np0005591760 podman[88994]: 2026-01-22 09:34:28.915235908 +0000 UTC m=+0.029706759 container create 3ee411a1bb9930492c45ebc56c2821b735d9c341850b2348dd3901b42c70228b (image=quay.io/ceph/ceph:v19, name=gifted_darwin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:28 np0005591760 systemd[1]: Started libpod-conmon-770fa0b654ff3901ba99dd35759c7547380fa4c6cd84d8494f476c3280bca16b.scope.
Jan 22 04:34:28 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:28 np0005591760 systemd[1]: Started libpod-conmon-3ee411a1bb9930492c45ebc56c2821b735d9c341850b2348dd3901b42c70228b.scope.
Jan 22 04:34:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e46d677b73fbbfe56ff25b8f898263be74ff031f2ef0a704dc7ef2af57f359c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e46d677b73fbbfe56ff25b8f898263be74ff031f2ef0a704dc7ef2af57f359c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e46d677b73fbbfe56ff25b8f898263be74ff031f2ef0a704dc7ef2af57f359c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e46d677b73fbbfe56ff25b8f898263be74ff031f2ef0a704dc7ef2af57f359c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:28 np0005591760 podman[88986]: 2026-01-22 09:34:28.947516178 +0000 UTC m=+0.084577471 container init 770fa0b654ff3901ba99dd35759c7547380fa4c6cd84d8494f476c3280bca16b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hellman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:28 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65c86412007f11215a57c82747d37f988b5eee61148fe747f62a2ab7b5e9be12/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65c86412007f11215a57c82747d37f988b5eee61148fe747f62a2ab7b5e9be12/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65c86412007f11215a57c82747d37f988b5eee61148fe747f62a2ab7b5e9be12/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:28 np0005591760 podman[88986]: 2026-01-22 09:34:28.956200514 +0000 UTC m=+0.093261787 container start 770fa0b654ff3901ba99dd35759c7547380fa4c6cd84d8494f476c3280bca16b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hellman, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:28 np0005591760 podman[88986]: 2026-01-22 09:34:28.957463973 +0000 UTC m=+0.094525245 container attach 770fa0b654ff3901ba99dd35759c7547380fa4c6cd84d8494f476c3280bca16b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:28 np0005591760 podman[88994]: 2026-01-22 09:34:28.959094001 +0000 UTC m=+0.073564873 container init 3ee411a1bb9930492c45ebc56c2821b735d9c341850b2348dd3901b42c70228b (image=quay.io/ceph/ceph:v19, name=gifted_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:28 np0005591760 podman[88994]: 2026-01-22 09:34:28.962931073 +0000 UTC m=+0.077401926 container start 3ee411a1bb9930492c45ebc56c2821b735d9c341850b2348dd3901b42c70228b (image=quay.io/ceph/ceph:v19, name=gifted_darwin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 22 04:34:28 np0005591760 podman[88994]: 2026-01-22 09:34:28.964221994 +0000 UTC m=+0.078692846 container attach 3ee411a1bb9930492c45ebc56c2821b735d9c341850b2348dd3901b42c70228b (image=quay.io/ceph/ceph:v19, name=gifted_darwin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 04:34:28 np0005591760 podman[88986]: 2026-01-22 09:34:28.879626379 +0000 UTC m=+0.016687671 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:29 np0005591760 podman[88994]: 2026-01-22 09:34:28.904832655 +0000 UTC m=+0.019303527 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]: {
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:    "0": [
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:        {
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            "devices": [
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "/dev/loop3"
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            ],
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            "lv_name": "ceph_lv0",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            "lv_size": "21470642176",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            "name": "ceph_lv0",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            "tags": {
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.cluster_name": "ceph",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.crush_device_class": "",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.encrypted": "0",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.osd_id": "0",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.type": "block",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.vdo": "0",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:                "ceph.with_tpm": "0"
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            },
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            "type": "block",
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:            "vg_name": "ceph_vg0"
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:        }
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]:    ]
Jan 22 04:34:29 np0005591760 interesting_hellman[89009]: }
Jan 22 04:34:29 np0005591760 systemd[1]: libpod-770fa0b654ff3901ba99dd35759c7547380fa4c6cd84d8494f476c3280bca16b.scope: Deactivated successfully.
Jan 22 04:34:29 np0005591760 podman[89044]: 2026-01-22 09:34:29.235204796 +0000 UTC m=+0.027809706 container died 770fa0b654ff3901ba99dd35759c7547380fa4c6cd84d8494f476c3280bca16b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hellman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 04:34:29 np0005591760 systemd[1]: var-lib-containers-storage-overlay-8e46d677b73fbbfe56ff25b8f898263be74ff031f2ef0a704dc7ef2af57f359c-merged.mount: Deactivated successfully.
Jan 22 04:34:29 np0005591760 podman[89044]: 2026-01-22 09:34:29.252883299 +0000 UTC m=+0.045488199 container remove 770fa0b654ff3901ba99dd35759c7547380fa4c6cd84d8494f476c3280bca16b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 22 04:34:29 np0005591760 systemd[1]: libpod-conmon-770fa0b654ff3901ba99dd35759c7547380fa4c6cd84d8494f476c3280bca16b.scope: Deactivated successfully.
Jan 22 04:34:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Jan 22 04:34:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1610688415' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 22 04:34:29 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/245233264' entity='client.admin' 
Jan 22 04:34:29 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1610688415' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 22 04:34:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1610688415' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 22 04:34:29 np0005591760 gifted_darwin[89014]: module 'dashboard' is already disabled
Jan 22 04:34:29 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.rfmoog(active, since 116s), standbys: compute-2.bisona, compute-1.upcmhd
Jan 22 04:34:29 np0005591760 systemd[1]: libpod-3ee411a1bb9930492c45ebc56c2821b735d9c341850b2348dd3901b42c70228b.scope: Deactivated successfully.
Jan 22 04:34:29 np0005591760 podman[88994]: 2026-01-22 09:34:29.577490434 +0000 UTC m=+0.691961286 container died 3ee411a1bb9930492c45ebc56c2821b735d9c341850b2348dd3901b42c70228b (image=quay.io/ceph/ceph:v19, name=gifted_darwin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:29 np0005591760 systemd[1]: var-lib-containers-storage-overlay-65c86412007f11215a57c82747d37f988b5eee61148fe747f62a2ab7b5e9be12-merged.mount: Deactivated successfully.
Jan 22 04:34:29 np0005591760 podman[88994]: 2026-01-22 09:34:29.598390957 +0000 UTC m=+0.712861809 container remove 3ee411a1bb9930492c45ebc56c2821b735d9c341850b2348dd3901b42c70228b (image=quay.io/ceph/ceph:v19, name=gifted_darwin, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 04:34:29 np0005591760 systemd[1]: libpod-conmon-3ee411a1bb9930492c45ebc56c2821b735d9c341850b2348dd3901b42c70228b.scope: Deactivated successfully.
Jan 22 04:34:29 np0005591760 podman[89148]: 2026-01-22 09:34:29.658768285 +0000 UTC m=+0.027341552 container create 6b3b3c5f5f287227fc30a1e40b5fb81f38acdf4acfb4ed3d6baf0809c067d077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mclean, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 04:34:29 np0005591760 systemd[1]: Started libpod-conmon-6b3b3c5f5f287227fc30a1e40b5fb81f38acdf4acfb4ed3d6baf0809c067d077.scope.
Jan 22 04:34:29 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:34:29 np0005591760 podman[89148]: 2026-01-22 09:34:29.71573224 +0000 UTC m=+0.084305537 container init 6b3b3c5f5f287227fc30a1e40b5fb81f38acdf4acfb4ed3d6baf0809c067d077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mclean, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 04:34:29 np0005591760 podman[89148]: 2026-01-22 09:34:29.72050333 +0000 UTC m=+0.089076597 container start 6b3b3c5f5f287227fc30a1e40b5fb81f38acdf4acfb4ed3d6baf0809c067d077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:29 np0005591760 podman[89148]: 2026-01-22 09:34:29.722622647 +0000 UTC m=+0.091195934 container attach 6b3b3c5f5f287227fc30a1e40b5fb81f38acdf4acfb4ed3d6baf0809c067d077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mclean, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:29 np0005591760 youthful_mclean[89161]: 167 167
Jan 22 04:34:29 np0005591760 systemd[1]: libpod-6b3b3c5f5f287227fc30a1e40b5fb81f38acdf4acfb4ed3d6baf0809c067d077.scope: Deactivated successfully.
Jan 22 04:34:29 np0005591760 podman[89148]: 2026-01-22 09:34:29.723662743 +0000 UTC m=+0.092236011 container died 6b3b3c5f5f287227fc30a1e40b5fb81f38acdf4acfb4ed3d6baf0809c067d077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mclean, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:34:29 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ec86f16dd2884ce118e6aa1becd1b35629d4a042b13b63ccb7ee99990fa085f2-merged.mount: Deactivated successfully.
Jan 22 04:34:29 np0005591760 podman[89148]: 2026-01-22 09:34:29.744307515 +0000 UTC m=+0.112880781 container remove 6b3b3c5f5f287227fc30a1e40b5fb81f38acdf4acfb4ed3d6baf0809c067d077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:34:29 np0005591760 podman[89148]: 2026-01-22 09:34:29.647923482 +0000 UTC m=+0.016496760 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:29 np0005591760 systemd[1]: libpod-conmon-6b3b3c5f5f287227fc30a1e40b5fb81f38acdf4acfb4ed3d6baf0809c067d077.scope: Deactivated successfully.
Jan 22 04:34:29 np0005591760 podman[89208]: 2026-01-22 09:34:29.857150262 +0000 UTC m=+0.027391427 container create b02f2ae7d1bf638aeec6b7c89c76d135f83cfb8fc922b89cc208a8bad22df335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:29 np0005591760 python3[89202]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:29 np0005591760 systemd[1]: Started libpod-conmon-b02f2ae7d1bf638aeec6b7c89c76d135f83cfb8fc922b89cc208a8bad22df335.scope.
Jan 22 04:34:29 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf02f326df0538404a1ab3508773fb2c12f2af4f339be3016e149031abf2b72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf02f326df0538404a1ab3508773fb2c12f2af4f339be3016e149031abf2b72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf02f326df0538404a1ab3508773fb2c12f2af4f339be3016e149031abf2b72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf02f326df0538404a1ab3508773fb2c12f2af4f339be3016e149031abf2b72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:29 np0005591760 podman[89208]: 2026-01-22 09:34:29.911890482 +0000 UTC m=+0.082131657 container init b02f2ae7d1bf638aeec6b7c89c76d135f83cfb8fc922b89cc208a8bad22df335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:29 np0005591760 podman[89222]: 2026-01-22 09:34:29.913261114 +0000 UTC m=+0.031848053 container create 9cf115cfb53fe03bc66b6762f7024f2ead5c236fda017c96fe56e5214ba9a693 (image=quay.io/ceph/ceph:v19, name=vibrant_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 22 04:34:29 np0005591760 podman[89208]: 2026-01-22 09:34:29.917868674 +0000 UTC m=+0.088109829 container start b02f2ae7d1bf638aeec6b7c89c76d135f83cfb8fc922b89cc208a8bad22df335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_nobel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:29 np0005591760 podman[89208]: 2026-01-22 09:34:29.918917347 +0000 UTC m=+0.089158522 container attach b02f2ae7d1bf638aeec6b7c89c76d135f83cfb8fc922b89cc208a8bad22df335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:29 np0005591760 systemd[1]: Started libpod-conmon-9cf115cfb53fe03bc66b6762f7024f2ead5c236fda017c96fe56e5214ba9a693.scope.
Jan 22 04:34:29 np0005591760 podman[89208]: 2026-01-22 09:34:29.846008628 +0000 UTC m=+0.016249803 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:29 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6116971db66bea95488ded32a9807831cd3118e145febe0c901fa01322ae926/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6116971db66bea95488ded32a9807831cd3118e145febe0c901fa01322ae926/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6116971db66bea95488ded32a9807831cd3118e145febe0c901fa01322ae926/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:29 np0005591760 podman[89222]: 2026-01-22 09:34:29.967596969 +0000 UTC m=+0.086183918 container init 9cf115cfb53fe03bc66b6762f7024f2ead5c236fda017c96fe56e5214ba9a693 (image=quay.io/ceph/ceph:v19, name=vibrant_bohr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:29 np0005591760 podman[89222]: 2026-01-22 09:34:29.972527771 +0000 UTC m=+0.091114710 container start 9cf115cfb53fe03bc66b6762f7024f2ead5c236fda017c96fe56e5214ba9a693 (image=quay.io/ceph/ceph:v19, name=vibrant_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 22 04:34:29 np0005591760 podman[89222]: 2026-01-22 09:34:29.973658348 +0000 UTC m=+0.092245288 container attach 9cf115cfb53fe03bc66b6762f7024f2ead5c236fda017c96fe56e5214ba9a693 (image=quay.io/ceph/ceph:v19, name=vibrant_bohr, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 04:34:29 np0005591760 podman[89222]: 2026-01-22 09:34:29.901002368 +0000 UTC m=+0.019589327 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3348958984' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 22 04:34:30 np0005591760 lvm[89333]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:34:30 np0005591760 lvm[89333]: VG ceph_vg0 finished
Jan 22 04:34:30 np0005591760 focused_nobel[89228]: {}
Jan 22 04:34:30 np0005591760 systemd[1]: libpod-b02f2ae7d1bf638aeec6b7c89c76d135f83cfb8fc922b89cc208a8bad22df335.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 conmon[89228]: conmon b02f2ae7d1bf638aeec6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b02f2ae7d1bf638aeec6b7c89c76d135f83cfb8fc922b89cc208a8bad22df335.scope/container/memory.events
Jan 22 04:34:30 np0005591760 podman[89208]: 2026-01-22 09:34:30.444972483 +0000 UTC m=+0.615213629 container died b02f2ae7d1bf638aeec6b7c89c76d135f83cfb8fc922b89cc208a8bad22df335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_nobel, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:30 np0005591760 systemd[1]: var-lib-containers-storage-overlay-acf02f326df0538404a1ab3508773fb2c12f2af4f339be3016e149031abf2b72-merged.mount: Deactivated successfully.
Jan 22 04:34:30 np0005591760 podman[89208]: 2026-01-22 09:34:30.469611335 +0000 UTC m=+0.639852491 container remove b02f2ae7d1bf638aeec6b7c89c76d135f83cfb8fc922b89cc208a8bad22df335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 04:34:30 np0005591760 systemd[1]: libpod-conmon-b02f2ae7d1bf638aeec6b7c89c76d135f83cfb8fc922b89cc208a8bad22df335.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:30 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 8511f2da-7e2c-427d-babd-f58390d9ede2 (Updating rgw.rgw deployment (+3 -> 3))
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aqqfbf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aqqfbf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aqqfbf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:30 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.aqqfbf on compute-2
Jan 22 04:34:30 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.aqqfbf on compute-2
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1610688415' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/3348958984' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aqqfbf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aqqfbf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: from='mgr.14122 192.168.122.100:0/608607566' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: Deploying daemon rgw.rgw.compute-2.aqqfbf on compute-2
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3348958984' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 22 04:34:30 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.rfmoog(active, since 117s), standbys: compute-2.bisona, compute-1.upcmhd
Jan 22 04:34:30 np0005591760 systemd[1]: libpod-9cf115cfb53fe03bc66b6762f7024f2ead5c236fda017c96fe56e5214ba9a693.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 podman[89222]: 2026-01-22 09:34:30.584036405 +0000 UTC m=+0.702623343 container died 9cf115cfb53fe03bc66b6762f7024f2ead5c236fda017c96fe56e5214ba9a693 (image=quay.io/ceph/ceph:v19, name=vibrant_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 04:34:30 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c6116971db66bea95488ded32a9807831cd3118e145febe0c901fa01322ae926-merged.mount: Deactivated successfully.
Jan 22 04:34:30 np0005591760 podman[89222]: 2026-01-22 09:34:30.6027055 +0000 UTC m=+0.721292439 container remove 9cf115cfb53fe03bc66b6762f7024f2ead5c236fda017c96fe56e5214ba9a693 (image=quay.io/ceph/ceph:v19, name=vibrant_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:30 np0005591760 systemd[1]: libpod-conmon-9cf115cfb53fe03bc66b6762f7024f2ead5c236fda017c96fe56e5214ba9a693.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd[1]: session-33.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd[1]: session-33.scope: Consumed 20.007s CPU time.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Session 33 logged out. Waiting for processes to exit.
Jan 22 04:34:30 np0005591760 systemd[1]: session-32.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Removed session 33.
Jan 22 04:34:30 np0005591760 systemd[1]: session-30.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd[1]: session-27.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd[1]: session-31.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Session 32 logged out. Waiting for processes to exit.
Jan 22 04:34:30 np0005591760 systemd[1]: session-26.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd[1]: session-28.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Session 31 logged out. Waiting for processes to exit.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Session 30 logged out. Waiting for processes to exit.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Session 26 logged out. Waiting for processes to exit.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Session 28 logged out. Waiting for processes to exit.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Session 27 logged out. Waiting for processes to exit.
Jan 22 04:34:30 np0005591760 systemd[1]: session-21.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd[1]: session-29.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Session 21 logged out. Waiting for processes to exit.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Session 29 logged out. Waiting for processes to exit.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Session 25 logged out. Waiting for processes to exit.
Jan 22 04:34:30 np0005591760 systemd[1]: session-25.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd[1]: session-24.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Removed session 32.
Jan 22 04:34:30 np0005591760 systemd[1]: session-23.scope: Deactivated successfully.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Session 24 logged out. Waiting for processes to exit.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Session 23 logged out. Waiting for processes to exit.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Removed session 30.
Jan 22 04:34:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ignoring --setuser ceph since I am not root
Jan 22 04:34:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ignoring --setgroup ceph since I am not root
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Removed session 27.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Removed session 31.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Removed session 26.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Removed session 28.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Removed session 21.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Removed session 29.
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Removed session 25.
Jan 22 04:34:30 np0005591760 ceph-mgr[74522]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Removed session 24.
Jan 22 04:34:30 np0005591760 ceph-mgr[74522]: pidfile_write: ignore empty --pid-file
Jan 22 04:34:30 np0005591760 systemd-logind[747]: Removed session 23.
Jan 22 04:34:30 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'alerts'
Jan 22 04:34:30 np0005591760 ceph-mgr[74522]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 04:34:30 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'balancer'
Jan 22 04:34:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:30.781+0000 7f2cc5c17140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 04:34:30 np0005591760 ceph-mgr[74522]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 04:34:30 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'cephadm'
Jan 22 04:34:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:30.853+0000 7f2cc5c17140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 04:34:30 np0005591760 python3[89401]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:31 np0005591760 podman[89402]: 2026-01-22 09:34:31.017419038 +0000 UTC m=+0.029506466 container create 8c678a57a73ff11ae4c986e283739976a89770d4a1097597d0f2f62f21ca468a (image=quay.io/ceph/ceph:v19, name=busy_chatelet, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:31 np0005591760 systemd[1]: Started libpod-conmon-8c678a57a73ff11ae4c986e283739976a89770d4a1097597d0f2f62f21ca468a.scope.
Jan 22 04:34:31 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0d7d703825cce5d701f55a3b59917bf7861a79faad2563968b8570ed86e83f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0d7d703825cce5d701f55a3b59917bf7861a79faad2563968b8570ed86e83f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0d7d703825cce5d701f55a3b59917bf7861a79faad2563968b8570ed86e83f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:31 np0005591760 podman[89402]: 2026-01-22 09:34:31.070613555 +0000 UTC m=+0.082700994 container init 8c678a57a73ff11ae4c986e283739976a89770d4a1097597d0f2f62f21ca468a (image=quay.io/ceph/ceph:v19, name=busy_chatelet, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:31 np0005591760 podman[89402]: 2026-01-22 09:34:31.075731833 +0000 UTC m=+0.087819260 container start 8c678a57a73ff11ae4c986e283739976a89770d4a1097597d0f2f62f21ca468a (image=quay.io/ceph/ceph:v19, name=busy_chatelet, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:31 np0005591760 podman[89402]: 2026-01-22 09:34:31.076837993 +0000 UTC m=+0.088925431 container attach 8c678a57a73ff11ae4c986e283739976a89770d4a1097597d0f2f62f21ca468a (image=quay.io/ceph/ceph:v19, name=busy_chatelet, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:34:31 np0005591760 podman[89402]: 2026-01-22 09:34:31.005060433 +0000 UTC m=+0.017147880 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:31 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'crash'
Jan 22 04:34:31 np0005591760 ceph-mgr[74522]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 04:34:31 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'dashboard'
Jan 22 04:34:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:31.525+0000 7f2cc5c17140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 04:34:31 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/3348958984' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'devicehealth'
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'diskprediction_local'
Jan 22 04:34:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:32.063+0000 7f2cc5c17140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 04:34:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 22 04:34:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 22 04:34:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  from numpy import show_config as show_numpy_config
Jan 22 04:34:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:32.203+0000 7f2cc5c17140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'influx'
Jan 22 04:34:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:32.264+0000 7f2cc5c17140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'insights'
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'iostat'
Jan 22 04:34:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:32.384+0000 7f2cc5c17140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'k8sevents'
Jan 22 04:34:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 22 04:34:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 22 04:34:32 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 22 04:34:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 22 04:34:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'localpool'
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'mds_autoscaler'
Jan 22 04:34:32 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 28 pg[8.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [0] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:34:32 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'mirroring'
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'nfs'
Jan 22 04:34:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:34:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:33.245+0000 7f2cc5c17140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'orchestrator'
Jan 22 04:34:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:33.437+0000 7f2cc5c17140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'osd_perf_query'
Jan 22 04:34:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:33.503+0000 7f2cc5c17140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'osd_support'
Jan 22 04:34:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:33.562+0000 7f2cc5c17140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'pg_autoscaler'
Jan 22 04:34:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 22 04:34:33 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 04:34:33 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.102:0/2506501459' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 04:34:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 22 04:34:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 22 04:34:33 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 22 04:34:33 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 29 pg[8.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [0] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:34:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:33.639+0000 7f2cc5c17140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'progress'
Jan 22 04:34:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:33.702+0000 7f2cc5c17140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 04:34:33 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'prometheus'
Jan 22 04:34:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:34.000+0000 7f2cc5c17140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 04:34:34 np0005591760 ceph-mgr[74522]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 04:34:34 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rbd_support'
Jan 22 04:34:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:34.088+0000 7f2cc5c17140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 04:34:34 np0005591760 ceph-mgr[74522]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 04:34:34 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'restful'
Jan 22 04:34:34 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rgw'
Jan 22 04:34:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:34.464+0000 7f2cc5c17140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 04:34:34 np0005591760 ceph-mgr[74522]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 04:34:34 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rook'
Jan 22 04:34:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 22 04:34:34 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 22 04:34:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 22 04:34:34 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 22 04:34:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 22 04:34:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 04:34:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:34.941+0000 7f2cc5c17140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 04:34:34 np0005591760 ceph-mgr[74522]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 04:34:34 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'selftest'
Jan 22 04:34:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:35.002+0000 7f2cc5c17140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'snap_schedule'
Jan 22 04:34:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:35.071+0000 7f2cc5c17140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'stats'
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'status'
Jan 22 04:34:35 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 30 pg[9.0( empty local-lis/les=0/0 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [0] r=0 lpr=30 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:34:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:35.197+0000 7f2cc5c17140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'telegraf'
Jan 22 04:34:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:35.257+0000 7f2cc5c17140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'telemetry'
Jan 22 04:34:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:35.389+0000 7f2cc5c17140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'test_orchestrator'
Jan 22 04:34:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:35.577+0000 7f2cc5c17140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'volumes'
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.102:0/3366931585' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 04:34:35 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 31 pg[9.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [0] r=0 lpr=30 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:34:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:35.803+0000 7f2cc5c17140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'zabbix'
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.upcmhd restarted
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.upcmhd started
Jan 22 04:34:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:35.864+0000 7f2cc5c17140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Active manager daemon compute-0.rfmoog restarted
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rfmoog
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: ms_deliver_dispatch: unhandled message 0x55b47489b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map Activating!
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map I am now activating
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.rfmoog(active, starting, since 0.0160707s), standbys: compute-2.bisona, compute-1.upcmhd
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: balancer
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [balancer INFO root] Starting
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Manager daemon compute-0.rfmoog is now available
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:34:35
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: cephadm
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: crash
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: dashboard
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: devicehealth
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] Starting
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [dashboard INFO sso] Loading SSO DB version=1
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: iostat
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: nfs
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: orchestrator
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: pg_autoscaler
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: progress
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.bisona restarted
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.bisona started
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [progress INFO root] Loading...
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f2c63772bb0>, <progress.module.GhostEvent object at 0x7f2c6370a850>, <progress.module.GhostEvent object at 0x7f2c6370a880>, <progress.module.GhostEvent object at 0x7f2c6370a8b0>, <progress.module.GhostEvent object at 0x7f2c6370a8e0>] historic events
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [progress INFO root] Loaded OSDMap, ready.
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] recovery thread starting
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] starting setup
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: rbd_support
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: restful
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"} v 0)
Jan 22 04:34:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"}]: dispatch
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: status
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: telemetry
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [restful INFO root] server_addr: :: server_port: 8003
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [restful WARNING root] server not running: no certificate configured
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:34:35 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] PerfHandler: starting
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TaskHandler: starting
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"} v 0)
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"}]: dispatch
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] setup complete
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: volumes
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 22 04:34:36 np0005591760 systemd-logind[747]: New session 34 of user ceph-admin.
Jan 22 04:34:36 np0005591760 systemd[1]: Started Session 34 of User ceph-admin.
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.module] Engine started.
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: Active manager daemon compute-0.rfmoog restarted
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: Activating manager daemon compute-0.rfmoog
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.102:0/3366931585' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: Manager daemon compute-0.rfmoog is now available
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"}]: dispatch
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: from='mgr.14385 192.168.122.100:0/392909353' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"}]: dispatch
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"}]: dispatch
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: from='mgr.14385 192.168.122.100:0/392909353' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"}]: dispatch
Jan 22 04:34:36 np0005591760 podman[89692]: 2026-01-22 09:34:36.836280706 +0000 UTC m=+0.038439583 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.rfmoog(active, since 1.02138s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.24161 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:34:36 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v3: 10 pgs: 1 creating+peering, 9 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Jan 22 04:34:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:36 np0005591760 busy_chatelet[89414]: Option GRAFANA_API_USERNAME updated
Jan 22 04:34:36 np0005591760 podman[89692]: 2026-01-22 09:34:36.93057751 +0000 UTC m=+0.132736389 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 04:34:36 np0005591760 podman[89402]: 2026-01-22 09:34:36.932571069 +0000 UTC m=+5.944658497 container died 8c678a57a73ff11ae4c986e283739976a89770d4a1097597d0f2f62f21ca468a (image=quay.io/ceph/ceph:v19, name=busy_chatelet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Jan 22 04:34:36 np0005591760 systemd[1]: libpod-8c678a57a73ff11ae4c986e283739976a89770d4a1097597d0f2f62f21ca468a.scope: Deactivated successfully.
Jan 22 04:34:36 np0005591760 systemd[1]: var-lib-containers-storage-overlay-5e0d7d703825cce5d701f55a3b59917bf7861a79faad2563968b8570ed86e83f-merged.mount: Deactivated successfully.
Jan 22 04:34:36 np0005591760 podman[89402]: 2026-01-22 09:34:36.96163646 +0000 UTC m=+5.973723887 container remove 8c678a57a73ff11ae4c986e283739976a89770d4a1097597d0f2f62f21ca468a (image=quay.io/ceph/ceph:v19, name=busy_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 04:34:36 np0005591760 systemd[75491]: Starting Mark boot as successful...
Jan 22 04:34:36 np0005591760 systemd[75491]: Finished Mark boot as successful.
Jan 22 04:34:36 np0005591760 systemd[1]: libpod-conmon-8c678a57a73ff11ae4c986e283739976a89770d4a1097597d0f2f62f21ca468a.scope: Deactivated successfully.
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:34:37] ENGINE Bus STARTING
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:34:37] ENGINE Bus STARTING
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:34:37] ENGINE Serving on http://192.168.122.100:8765
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:34:37] ENGINE Serving on http://192.168.122.100:8765
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 python3[89809]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:34:37] ENGINE Serving on https://192.168.122.100:7150
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:34:37] ENGINE Serving on https://192.168.122.100:7150
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:34:37] ENGINE Bus STARTED
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:34:37] ENGINE Bus STARTED
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:34:37] ENGINE Client ('192.168.122.100', 40276) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:34:37] ENGINE Client ('192.168.122.100', 40276) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 04:34:37 np0005591760 podman[89863]: 2026-01-22 09:34:37.296359458 +0000 UTC m=+0.028345100 container create 5fc1963a3d3744864081efec4946679dd6583542c039173488f420ad7b8d3b15 (image=quay.io/ceph/ceph:v19, name=cranky_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 22 04:34:37 np0005591760 systemd[1]: Started libpod-conmon-5fc1963a3d3744864081efec4946679dd6583542c039173488f420ad7b8d3b15.scope.
Jan 22 04:34:37 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:37 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746dedf3137d629ce234546121f9353cc35991d45263e4d3dbebf505719e8bb1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:37 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746dedf3137d629ce234546121f9353cc35991d45263e4d3dbebf505719e8bb1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:37 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746dedf3137d629ce234546121f9353cc35991d45263e4d3dbebf505719e8bb1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:37 np0005591760 podman[89863]: 2026-01-22 09:34:37.348756269 +0000 UTC m=+0.080741920 container init 5fc1963a3d3744864081efec4946679dd6583542c039173488f420ad7b8d3b15 (image=quay.io/ceph/ceph:v19, name=cranky_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:37 np0005591760 podman[89863]: 2026-01-22 09:34:37.354651063 +0000 UTC m=+0.086636705 container start 5fc1963a3d3744864081efec4946679dd6583542c039173488f420ad7b8d3b15 (image=quay.io/ceph/ceph:v19, name=cranky_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:37 np0005591760 podman[89863]: 2026-01-22 09:34:37.355799424 +0000 UTC m=+0.087785066 container attach 5fc1963a3d3744864081efec4946679dd6583542c039173488f420ad7b8d3b15 (image=quay.io/ceph/ceph:v19, name=cranky_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 22 04:34:37 np0005591760 podman[89863]: 2026-01-22 09:34:37.285284 +0000 UTC m=+0.017269651 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14427 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 cranky_jemison[89888]: Option GRAFANA_API_PASSWORD updated
Jan 22 04:34:37 np0005591760 systemd[1]: libpod-5fc1963a3d3744864081efec4946679dd6583542c039173488f420ad7b8d3b15.scope: Deactivated successfully.
Jan 22 04:34:37 np0005591760 podman[89863]: 2026-01-22 09:34:37.65763069 +0000 UTC m=+0.389616330 container died 5fc1963a3d3744864081efec4946679dd6583542c039173488f420ad7b8d3b15 (image=quay.io/ceph/ceph:v19, name=cranky_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:37 np0005591760 systemd[1]: var-lib-containers-storage-overlay-746dedf3137d629ce234546121f9353cc35991d45263e4d3dbebf505719e8bb1-merged.mount: Deactivated successfully.
Jan 22 04:34:37 np0005591760 podman[89863]: 2026-01-22 09:34:37.680393594 +0000 UTC m=+0.412379236 container remove 5fc1963a3d3744864081efec4946679dd6583542c039173488f420ad7b8d3b15 (image=quay.io/ceph/ceph:v19, name=cranky_jemison, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:34:37 np0005591760 systemd[1]: libpod-conmon-5fc1963a3d3744864081efec4946679dd6583542c039173488f420ad7b8d3b15.scope: Deactivated successfully.
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:34:37] ENGINE Bus STARTING
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:34:37] ENGINE Serving on http://192.168.122.100:8765
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:34:37] ENGINE Serving on https://192.168.122.100:7150
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:34:37] ENGINE Bus STARTED
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:34:37] ENGINE Client ('192.168.122.100', 40276) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: from='mgr.14385 192.168.122.100:0/392909353' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v5: 10 pgs: 1 creating+peering, 9 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 22 04:34:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 34 pg[11.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 128.7M
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 128.7M
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:34:37 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] Check health
Jan 22 04:34:37 np0005591760 python3[90026]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.7M
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.7M
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:34:38 np0005591760 podman[90053]: 2026-01-22 09:34:38.028771429 +0000 UTC m=+0.028534888 container create bc98c1e6d9c4dff7be709bd2befe50703232624f6025254b312f1f10fc37c70b (image=quay.io/ceph/ceph:v19, name=recursing_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 22 04:34:38 np0005591760 systemd[1]: Started libpod-conmon-bc98c1e6d9c4dff7be709bd2befe50703232624f6025254b312f1f10fc37c70b.scope.
Jan 22 04:34:38 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:38 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31d1a904de1ca1b0c051c92c53d97f85628cf6166c9d5cba84b16792aa1ead8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:38 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31d1a904de1ca1b0c051c92c53d97f85628cf6166c9d5cba84b16792aa1ead8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:38 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31d1a904de1ca1b0c051c92c53d97f85628cf6166c9d5cba84b16792aa1ead8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:34:38 np0005591760 podman[90053]: 2026-01-22 09:34:38.084047903 +0000 UTC m=+0.083811382 container init bc98c1e6d9c4dff7be709bd2befe50703232624f6025254b312f1f10fc37c70b (image=quay.io/ceph/ceph:v19, name=recursing_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 04:34:38 np0005591760 podman[90053]: 2026-01-22 09:34:38.088529657 +0000 UTC m=+0.088293115 container start bc98c1e6d9c4dff7be709bd2befe50703232624f6025254b312f1f10fc37c70b (image=quay.io/ceph/ceph:v19, name=recursing_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:38 np0005591760 podman[90053]: 2026-01-22 09:34:38.08967394 +0000 UTC m=+0.089437398 container attach bc98c1e6d9c4dff7be709bd2befe50703232624f6025254b312f1f10fc37c70b (image=quay.io/ceph/ceph:v19, name=recursing_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:38 np0005591760 podman[90053]: 2026-01-22 09:34:38.018105223 +0000 UTC m=+0.017868703 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.24217 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:38 np0005591760 recursing_rhodes[90088]: Option ALERTMANAGER_API_HOST updated
Jan 22 04:34:38 np0005591760 systemd[1]: libpod-bc98c1e6d9c4dff7be709bd2befe50703232624f6025254b312f1f10fc37c70b.scope: Deactivated successfully.
Jan 22 04:34:38 np0005591760 podman[90053]: 2026-01-22 09:34:38.381239286 +0000 UTC m=+0.381002765 container died bc98c1e6d9c4dff7be709bd2befe50703232624f6025254b312f1f10fc37c70b (image=quay.io/ceph/ceph:v19, name=recursing_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:38 np0005591760 systemd[1]: var-lib-containers-storage-overlay-d31d1a904de1ca1b0c051c92c53d97f85628cf6166c9d5cba84b16792aa1ead8-merged.mount: Deactivated successfully.
Jan 22 04:34:38 np0005591760 podman[90053]: 2026-01-22 09:34:38.405376008 +0000 UTC m=+0.405139467 container remove bc98c1e6d9c4dff7be709bd2befe50703232624f6025254b312f1f10fc37c70b (image=quay.io/ceph/ceph:v19, name=recursing_rhodes, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 04:34:38 np0005591760 systemd[1]: libpod-conmon-bc98c1e6d9c4dff7be709bd2befe50703232624f6025254b312f1f10fc37c70b.scope: Deactivated successfully.
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:38 np0005591760 python3[90382]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:38 np0005591760 podman[90452]: 2026-01-22 09:34:38.699444117 +0000 UTC m=+0.027711380 container create 7bd4d07451145b5f5cb99c1ca2f7c699949c518d330f41deac0c764a1ad5eae9 (image=quay.io/ceph/ceph:v19, name=infallible_bouman, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 04:34:38 np0005591760 systemd[1]: Started libpod-conmon-7bd4d07451145b5f5cb99c1ca2f7c699949c518d330f41deac0c764a1ad5eae9.scope.
Jan 22 04:34:38 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:38 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddaaca871dceb24c2a0e35d747d08a1bcb71c33a16e65798304c26581e55db82/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:38 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddaaca871dceb24c2a0e35d747d08a1bcb71c33a16e65798304c26581e55db82/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:38 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddaaca871dceb24c2a0e35d747d08a1bcb71c33a16e65798304c26581e55db82/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:38 np0005591760 podman[90452]: 2026-01-22 09:34:38.755173707 +0000 UTC m=+0.083440992 container init 7bd4d07451145b5f5cb99c1ca2f7c699949c518d330f41deac0c764a1ad5eae9 (image=quay.io/ceph/ceph:v19, name=infallible_bouman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:38 np0005591760 podman[90452]: 2026-01-22 09:34:38.764848899 +0000 UTC m=+0.093116163 container start 7bd4d07451145b5f5cb99c1ca2f7c699949c518d330f41deac0c764a1ad5eae9 (image=quay.io/ceph/ceph:v19, name=infallible_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:38 np0005591760 podman[90452]: 2026-01-22 09:34:38.766724425 +0000 UTC m=+0.094991680 container attach 7bd4d07451145b5f5cb99c1ca2f7c699949c518d330f41deac0c764a1ad5eae9 (image=quay.io/ceph/ceph:v19, name=infallible_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:34:38 np0005591760 podman[90452]: 2026-01-22 09:34:38.688268268 +0000 UTC m=+0.016535551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.rfmoog(active, since 3s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.102:0/3366931585' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: from='mgr.14385 192.168.122.100:0/392909353' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: Adjusting osd_memory_target on compute-1 to 128.7M
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: Unable to set osd_memory_target on compute-1 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: from='mgr.14385 192.168.122.100:0/392909353' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: Adjusting osd_memory_target on compute-0 to 128.7M
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: Unable to set osd_memory_target on compute-0 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: from='mgr.14385 192.168.122.100:0/392909353' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 22 04:34:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 04:34:38 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 35 pg[11.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:34:39 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:39 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:39 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14442 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 infallible_bouman[90492]: Option PROMETHEUS_API_HOST updated
Jan 22 04:34:39 np0005591760 systemd[1]: libpod-7bd4d07451145b5f5cb99c1ca2f7c699949c518d330f41deac0c764a1ad5eae9.scope: Deactivated successfully.
Jan 22 04:34:39 np0005591760 podman[90452]: 2026-01-22 09:34:39.086087525 +0000 UTC m=+0.414354799 container died 7bd4d07451145b5f5cb99c1ca2f7c699949c518d330f41deac0c764a1ad5eae9 (image=quay.io/ceph/ceph:v19, name=infallible_bouman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:39 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ddaaca871dceb24c2a0e35d747d08a1bcb71c33a16e65798304c26581e55db82-merged.mount: Deactivated successfully.
Jan 22 04:34:39 np0005591760 podman[90452]: 2026-01-22 09:34:39.106309876 +0000 UTC m=+0.434577141 container remove 7bd4d07451145b5f5cb99c1ca2f7c699949c518d330f41deac0c764a1ad5eae9 (image=quay.io/ceph/ceph:v19, name=infallible_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:34:39 np0005591760 systemd[1]: libpod-conmon-7bd4d07451145b5f5cb99c1ca2f7c699949c518d330f41deac0c764a1ad5eae9.scope: Deactivated successfully.
Jan 22 04:34:39 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:39 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:39 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:39 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 python3[90865]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:39 np0005591760 podman[90941]: 2026-01-22 09:34:39.41970313 +0000 UTC m=+0.030531894 container create bf1b3be8d885712673524fe56dec081104dc9a2743b98d855d228a4de734fc3c (image=quay.io/ceph/ceph:v19, name=thirsty_edison, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 22 04:34:39 np0005591760 systemd[1]: Started libpod-conmon-bf1b3be8d885712673524fe56dec081104dc9a2743b98d855d228a4de734fc3c.scope.
Jan 22 04:34:39 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5089d320c771633874261fe8b08f52171493da4449ca968b092e87abdc5904/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5089d320c771633874261fe8b08f52171493da4449ca968b092e87abdc5904/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5089d320c771633874261fe8b08f52171493da4449ca968b092e87abdc5904/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:39 np0005591760 podman[90941]: 2026-01-22 09:34:39.471541014 +0000 UTC m=+0.082369798 container init bf1b3be8d885712673524fe56dec081104dc9a2743b98d855d228a4de734fc3c (image=quay.io/ceph/ceph:v19, name=thirsty_edison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 22 04:34:39 np0005591760 podman[90941]: 2026-01-22 09:34:39.477186698 +0000 UTC m=+0.088015461 container start bf1b3be8d885712673524fe56dec081104dc9a2743b98d855d228a4de734fc3c (image=quay.io/ceph/ceph:v19, name=thirsty_edison, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 22 04:34:39 np0005591760 podman[90941]: 2026-01-22 09:34:39.478277659 +0000 UTC m=+0.089106424 container attach bf1b3be8d885712673524fe56dec081104dc9a2743b98d855d228a4de734fc3c (image=quay.io/ceph/ceph:v19, name=thirsty_edison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 22 04:34:39 np0005591760 podman[90941]: 2026-01-22 09:34:39.406386013 +0000 UTC m=+0.017214797 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev f7d80420-0301-41f3-9527-7dcd52778f29 (Updating node-exporter deployment (+3 -> 3))
Jan 22 04:34:39 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Jan 22 04:34:39 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Jan 22 04:34:39 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14448 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 thirsty_edison[90979]: Option GRAFANA_API_URL updated
Jan 22 04:34:39 np0005591760 systemd[1]: libpod-bf1b3be8d885712673524fe56dec081104dc9a2743b98d855d228a4de734fc3c.scope: Deactivated successfully.
Jan 22 04:34:39 np0005591760 podman[90941]: 2026-01-22 09:34:39.779135756 +0000 UTC m=+0.389964519 container died bf1b3be8d885712673524fe56dec081104dc9a2743b98d855d228a4de734fc3c (image=quay.io/ceph/ceph:v19, name=thirsty_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:39 np0005591760 systemd[1]: var-lib-containers-storage-overlay-fa5089d320c771633874261fe8b08f52171493da4449ca968b092e87abdc5904-merged.mount: Deactivated successfully.
Jan 22 04:34:39 np0005591760 podman[90941]: 2026-01-22 09:34:39.797601696 +0000 UTC m=+0.408430460 container remove bf1b3be8d885712673524fe56dec081104dc9a2743b98d855d228a4de734fc3c (image=quay.io/ceph/ceph:v19, name=thirsty_edison, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:39 np0005591760 systemd[1]: libpod-conmon-bf1b3be8d885712673524fe56dec081104dc9a2743b98d855d228a4de734fc3c.scope: Deactivated successfully.
Jan 22 04:34:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v8: 11 pgs: 1 unknown, 1 creating+peering, 9 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.102:0/3366931585' entity='client.rgw.rgw.compute-2.aqqfbf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: from='mgr.14385 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 22 04:34:39 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 22 04:34:40 np0005591760 python3[91213]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:40 np0005591760 systemd[1]: Reloading.
Jan 22 04:34:40 np0005591760 podman[91244]: 2026-01-22 09:34:40.109160762 +0000 UTC m=+0.039865389 container create 20e57be332a1931f45730336857d973bcde47e358101a5a5e6116292cd7450a6 (image=quay.io/ceph/ceph:v19, name=ecstatic_chaum, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:34:40 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:34:40 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:34:40 np0005591760 podman[91244]: 2026-01-22 09:34:40.094426646 +0000 UTC m=+0.025131273 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:40 np0005591760 systemd[1]: Started libpod-conmon-20e57be332a1931f45730336857d973bcde47e358101a5a5e6116292cd7450a6.scope.
Jan 22 04:34:40 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d31a7dcce1e8b19cafd16caa1e367798df2107a077e3ee735d7eba21a022cbf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d31a7dcce1e8b19cafd16caa1e367798df2107a077e3ee735d7eba21a022cbf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d31a7dcce1e8b19cafd16caa1e367798df2107a077e3ee735d7eba21a022cbf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:40 np0005591760 systemd[1]: Reloading.
Jan 22 04:34:40 np0005591760 podman[91244]: 2026-01-22 09:34:40.341775228 +0000 UTC m=+0.272479875 container init 20e57be332a1931f45730336857d973bcde47e358101a5a5e6116292cd7450a6 (image=quay.io/ceph/ceph:v19, name=ecstatic_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:40 np0005591760 podman[91244]: 2026-01-22 09:34:40.351587308 +0000 UTC m=+0.282291935 container start 20e57be332a1931f45730336857d973bcde47e358101a5a5e6116292cd7450a6 (image=quay.io/ceph/ceph:v19, name=ecstatic_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 04:34:40 np0005591760 podman[91244]: 2026-01-22 09:34:40.353203453 +0000 UTC m=+0.283908080 container attach 20e57be332a1931f45730336857d973bcde47e358101a5a5e6116292cd7450a6 (image=quay.io/ceph/ceph:v19, name=ecstatic_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:40 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:34:40 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:34:40 np0005591760 systemd[1]: Starting Ceph node-exporter.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:34:40 np0005591760 bash[91398]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Jan 22 04:34:40 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Jan 22 04:34:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3397378284' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 22 04:34:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3397378284' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 22 04:34:40 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.rfmoog(active, since 4s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:34:40 np0005591760 systemd[1]: libpod-20e57be332a1931f45730336857d973bcde47e358101a5a5e6116292cd7450a6.scope: Deactivated successfully.
Jan 22 04:34:40 np0005591760 podman[91244]: 2026-01-22 09:34:40.78327719 +0000 UTC m=+0.713981817 container died 20e57be332a1931f45730336857d973bcde47e358101a5a5e6116292cd7450a6 (image=quay.io/ceph/ceph:v19, name=ecstatic_chaum, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 22 04:34:40 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6d31a7dcce1e8b19cafd16caa1e367798df2107a077e3ee735d7eba21a022cbf-merged.mount: Deactivated successfully.
Jan 22 04:34:40 np0005591760 podman[91244]: 2026-01-22 09:34:40.812539031 +0000 UTC m=+0.743243658 container remove 20e57be332a1931f45730336857d973bcde47e358101a5a5e6116292cd7450a6 (image=quay.io/ceph/ceph:v19, name=ecstatic_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 22 04:34:40 np0005591760 systemd[1]: libpod-conmon-20e57be332a1931f45730336857d973bcde47e358101a5a5e6116292cd7450a6.scope: Deactivated successfully.
Jan 22 04:34:40 np0005591760 systemd-logind[747]: Session 34 logged out. Waiting for processes to exit.
Jan 22 04:34:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ignoring --setuser ceph since I am not root
Jan 22 04:34:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ignoring --setgroup ceph since I am not root
Jan 22 04:34:40 np0005591760 ceph-mgr[74522]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 22 04:34:40 np0005591760 ceph-mgr[74522]: pidfile_write: ignore empty --pid-file
Jan 22 04:34:40 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'alerts'
Jan 22 04:34:40 np0005591760 ceph-mon[74254]: Deploying daemon node-exporter.compute-0 on compute-0
Jan 22 04:34:40 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.rgw.rgw.compute-2.aqqfbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 04:34:40 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/3397378284' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 22 04:34:40 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/3397378284' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 22 04:34:40 np0005591760 ceph-mgr[74522]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 04:34:40 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'balancer'
Jan 22 04:34:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:40.971+0000 7fd14b8bb140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 04:34:41 np0005591760 ceph-mgr[74522]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 04:34:41 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'cephadm'
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:41.043+0000 7fd14b8bb140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 04:34:41 np0005591760 python3[91463]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:41 np0005591760 podman[91464]: 2026-01-22 09:34:41.141446316 +0000 UTC m=+0.039321793 container create 7c839bfaf24d9b92fa7c9c6a6e71cd8cbaa3e5e595a81cb50390510683916723 (image=quay.io/ceph/ceph:v19, name=lucid_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:41 np0005591760 systemd[1]: Started libpod-conmon-7c839bfaf24d9b92fa7c9c6a6e71cd8cbaa3e5e595a81cb50390510683916723.scope.
Jan 22 04:34:41 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca3291637d68f65966d57fa1dfdace9e52be897e708af8354acd239539e8672/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca3291637d68f65966d57fa1dfdace9e52be897e708af8354acd239539e8672/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca3291637d68f65966d57fa1dfdace9e52be897e708af8354acd239539e8672/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:41 np0005591760 podman[91464]: 2026-01-22 09:34:41.207259911 +0000 UTC m=+0.105135397 container init 7c839bfaf24d9b92fa7c9c6a6e71cd8cbaa3e5e595a81cb50390510683916723 (image=quay.io/ceph/ceph:v19, name=lucid_nobel, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:41 np0005591760 bash[91398]: Getting image source signatures
Jan 22 04:34:41 np0005591760 bash[91398]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Jan 22 04:34:41 np0005591760 bash[91398]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Jan 22 04:34:41 np0005591760 bash[91398]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Jan 22 04:34:41 np0005591760 podman[91464]: 2026-01-22 09:34:41.220844965 +0000 UTC m=+0.118720441 container start 7c839bfaf24d9b92fa7c9c6a6e71cd8cbaa3e5e595a81cb50390510683916723 (image=quay.io/ceph/ceph:v19, name=lucid_nobel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 22 04:34:41 np0005591760 podman[91464]: 2026-01-22 09:34:41.22237136 +0000 UTC m=+0.120246837 container attach 7c839bfaf24d9b92fa7c9c6a6e71cd8cbaa3e5e595a81cb50390510683916723 (image=quay.io/ceph/ceph:v19, name=lucid_nobel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Jan 22 04:34:41 np0005591760 podman[91464]: 2026-01-22 09:34:41.12641058 +0000 UTC m=+0.024286076 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:41 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Jan 22 04:34:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 22 04:34:41 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'crash'
Jan 22 04:34:41 np0005591760 ceph-mgr[74522]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 04:34:41 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'dashboard'
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:41.746+0000 7fd14b8bb140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 04:34:41 np0005591760 bash[91398]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Jan 22 04:34:41 np0005591760 bash[91398]: Writing manifest to image destination
Jan 22 04:34:41 np0005591760 podman[91398]: 2026-01-22 09:34:41.818269652 +0000 UTC m=+1.172237906 container create 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:34:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e406c006b115af27979891aa34b688c048909e5d99ce9707efc38daf5cb2c46/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:41 np0005591760 podman[91398]: 2026-01-22 09:34:41.861173765 +0000 UTC m=+1.215142009 container init 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:34:41 np0005591760 podman[91398]: 2026-01-22 09:34:41.808661688 +0000 UTC m=+1.162629952 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Jan 22 04:34:41 np0005591760 podman[91398]: 2026-01-22 09:34:41.865615823 +0000 UTC m=+1.219584067 container start 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:34:41 np0005591760 bash[91398]: 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.871Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Jan 22 04:34:41 np0005591760 systemd[1]: Started Ceph node-exporter.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.872Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.872Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.872Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.872Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.872Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=arp
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=bcache
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=bonding
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=cpu
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=dmi
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=edac
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=entropy
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=filefd
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=hwmon
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=netclass
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=netdev
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=netstat
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=nfs
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=nvme
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=os
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=pressure
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=rapl
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=selinux
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=softnet
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=stat
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=textfile
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=thermal_zone
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=time
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=uname
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=xfs
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.873Z caller=node_exporter.go:117 level=info collector=zfs
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.874Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Jan 22 04:34:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[91564]: ts=2026-01-22T09:34:41.874Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Jan 22 04:34:41 np0005591760 systemd[1]: session-34.scope: Deactivated successfully.
Jan 22 04:34:41 np0005591760 systemd[1]: session-34.scope: Consumed 3.452s CPU time.
Jan 22 04:34:41 np0005591760 systemd-logind[747]: Removed session 34.
Jan 22 04:34:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 22 04:34:41 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.rfmoog(active, since 6s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:34:41 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/613681825' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 22 04:34:41 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 22 04:34:41 np0005591760 systemd[1]: libpod-7c839bfaf24d9b92fa7c9c6a6e71cd8cbaa3e5e595a81cb50390510683916723.scope: Deactivated successfully.
Jan 22 04:34:41 np0005591760 podman[91464]: 2026-01-22 09:34:41.973633208 +0000 UTC m=+0.871508685 container died 7c839bfaf24d9b92fa7c9c6a6e71cd8cbaa3e5e595a81cb50390510683916723 (image=quay.io/ceph/ceph:v19, name=lucid_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 22 04:34:41 np0005591760 systemd[1]: var-lib-containers-storage-overlay-bca3291637d68f65966d57fa1dfdace9e52be897e708af8354acd239539e8672-merged.mount: Deactivated successfully.
Jan 22 04:34:41 np0005591760 podman[91464]: 2026-01-22 09:34:41.995269312 +0000 UTC m=+0.893144789 container remove 7c839bfaf24d9b92fa7c9c6a6e71cd8cbaa3e5e595a81cb50390510683916723 (image=quay.io/ceph/ceph:v19, name=lucid_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 22 04:34:42 np0005591760 systemd[1]: libpod-conmon-7c839bfaf24d9b92fa7c9c6a6e71cd8cbaa3e5e595a81cb50390510683916723.scope: Deactivated successfully.
Jan 22 04:34:42 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'devicehealth'
Jan 22 04:34:42 np0005591760 ceph-mgr[74522]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 04:34:42 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'diskprediction_local'
Jan 22 04:34:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:42.300+0000 7fd14b8bb140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 04:34:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 22 04:34:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 22 04:34:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  from numpy import show_config as show_numpy_config
Jan 22 04:34:42 np0005591760 ceph-mgr[74522]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 04:34:42 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'influx'
Jan 22 04:34:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:42.444+0000 7fd14b8bb140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 04:34:42 np0005591760 ceph-mgr[74522]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 04:34:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:42.508+0000 7fd14b8bb140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 04:34:42 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'insights'
Jan 22 04:34:42 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'iostat'
Jan 22 04:34:42 np0005591760 ceph-mgr[74522]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 04:34:42 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'k8sevents'
Jan 22 04:34:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:42.630+0000 7fd14b8bb140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 04:34:42 np0005591760 python3[91660]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:34:42 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'localpool'
Jan 22 04:34:42 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 22 04:34:42 np0005591760 python3[91731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769074482.5001712-37858-280159400666146/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'mds_autoscaler'
Jan 22 04:34:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'mirroring'
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'nfs'
Jan 22 04:34:43 np0005591760 python3[91781]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:43 np0005591760 podman[91782]: 2026-01-22 09:34:43.433169583 +0000 UTC m=+0.027431823 container create ac5f6d3dc3dec0a9b92baccb2cc2b27ba3d3683e923a78e5306386d7b23124ea (image=quay.io/ceph/ceph:v19, name=dreamy_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 22 04:34:43 np0005591760 systemd[1]: Started libpod-conmon-ac5f6d3dc3dec0a9b92baccb2cc2b27ba3d3683e923a78e5306386d7b23124ea.scope.
Jan 22 04:34:43 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:43 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989e493b2b58d3a01e0d4da3e3bef4bb17d0d46f11ad7ab914dcc69aaa994bc0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:43 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989e493b2b58d3a01e0d4da3e3bef4bb17d0d46f11ad7ab914dcc69aaa994bc0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:43 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989e493b2b58d3a01e0d4da3e3bef4bb17d0d46f11ad7ab914dcc69aaa994bc0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:43 np0005591760 podman[91782]: 2026-01-22 09:34:43.498256795 +0000 UTC m=+0.092519056 container init ac5f6d3dc3dec0a9b92baccb2cc2b27ba3d3683e923a78e5306386d7b23124ea (image=quay.io/ceph/ceph:v19, name=dreamy_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'orchestrator'
Jan 22 04:34:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:43.499+0000 7fd14b8bb140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 04:34:43 np0005591760 podman[91782]: 2026-01-22 09:34:43.512876013 +0000 UTC m=+0.107138245 container start ac5f6d3dc3dec0a9b92baccb2cc2b27ba3d3683e923a78e5306386d7b23124ea (image=quay.io/ceph/ceph:v19, name=dreamy_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:43 np0005591760 podman[91782]: 2026-01-22 09:34:43.422332914 +0000 UTC m=+0.016595175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:43 np0005591760 podman[91782]: 2026-01-22 09:34:43.514098554 +0000 UTC m=+0.108360806 container attach ac5f6d3dc3dec0a9b92baccb2cc2b27ba3d3683e923a78e5306386d7b23124ea (image=quay.io/ceph/ceph:v19, name=dreamy_napier, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:43.689+0000 7fd14b8bb140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'osd_perf_query'
Jan 22 04:34:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:43.756+0000 7fd14b8bb140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'osd_support'
Jan 22 04:34:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:43.815+0000 7fd14b8bb140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'pg_autoscaler'
Jan 22 04:34:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:43.884+0000 7fd14b8bb140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'progress'
Jan 22 04:34:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:43.946+0000 7fd14b8bb140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 04:34:43 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'prometheus'
Jan 22 04:34:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:44.245+0000 7fd14b8bb140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 04:34:44 np0005591760 ceph-mgr[74522]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 04:34:44 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rbd_support'
Jan 22 04:34:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:44.331+0000 7fd14b8bb140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 04:34:44 np0005591760 ceph-mgr[74522]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 04:34:44 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'restful'
Jan 22 04:34:44 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rgw'
Jan 22 04:34:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:44.707+0000 7fd14b8bb140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 04:34:44 np0005591760 ceph-mgr[74522]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 04:34:44 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rook'
Jan 22 04:34:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:45.190+0000 7fd14b8bb140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'selftest'
Jan 22 04:34:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:45.252+0000 7fd14b8bb140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'snap_schedule'
Jan 22 04:34:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:45.322+0000 7fd14b8bb140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'stats'
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'status'
Jan 22 04:34:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:45.449+0000 7fd14b8bb140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'telegraf'
Jan 22 04:34:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:45.511+0000 7fd14b8bb140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'telemetry'
Jan 22 04:34:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:45.645+0000 7fd14b8bb140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'test_orchestrator'
Jan 22 04:34:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:45.834+0000 7fd14b8bb140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 04:34:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'volumes'
Jan 22 04:34:46 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.upcmhd restarted
Jan 22 04:34:46 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.upcmhd started
Jan 22 04:34:46 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.bisona restarted
Jan 22 04:34:46 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.bisona started
Jan 22 04:34:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:46.066+0000 7fd14b8bb140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'zabbix'
Jan 22 04:34:46 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.rfmoog(active, since 10s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:34:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:46.132+0000 7fd14b8bb140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 04:34:46 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Active manager daemon compute-0.rfmoog restarted
Jan 22 04:34:46 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 22 04:34:46 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rfmoog
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: ms_deliver_dispatch: unhandled message 0x559b6122d860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr respawn  1: '-n'
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr respawn  2: 'mgr.compute-0.rfmoog'
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr respawn  3: '-f'
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr respawn  4: '--setuser'
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr respawn  5: 'ceph'
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr respawn  6: '--setgroup'
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr respawn  7: 'ceph'
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr respawn  8: '--default-log-to-file=false'
Jan 22 04:34:46 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 22 04:34:46 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 22 04:34:46 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.rfmoog(active, starting, since 0.00985786s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:34:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ignoring --setuser ceph since I am not root
Jan 22 04:34:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ignoring --setgroup ceph since I am not root
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: pidfile_write: ignore empty --pid-file
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'alerts'
Jan 22 04:34:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:46.312+0000 7fc05e115140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'balancer'
Jan 22 04:34:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:46.382+0000 7fc05e115140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'cephadm'
Jan 22 04:34:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'crash'
Jan 22 04:34:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:47.056+0000 7fc05e115140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 04:34:47 np0005591760 ceph-mgr[74522]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 04:34:47 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'dashboard'
Jan 22 04:34:47 np0005591760 ceph-mon[74254]: Active manager daemon compute-0.rfmoog restarted
Jan 22 04:34:47 np0005591760 ceph-mon[74254]: Activating manager daemon compute-0.rfmoog
Jan 22 04:34:47 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'devicehealth'
Jan 22 04:34:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:47.595+0000 7fc05e115140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 04:34:47 np0005591760 ceph-mgr[74522]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 04:34:47 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'diskprediction_local'
Jan 22 04:34:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 22 04:34:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 22 04:34:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  from numpy import show_config as show_numpy_config
Jan 22 04:34:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:47.737+0000 7fc05e115140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 04:34:47 np0005591760 ceph-mgr[74522]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 04:34:47 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'influx'
Jan 22 04:34:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:47.799+0000 7fc05e115140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 04:34:47 np0005591760 ceph-mgr[74522]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 04:34:47 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'insights'
Jan 22 04:34:47 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'iostat'
Jan 22 04:34:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:47.917+0000 7fc05e115140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 04:34:47 np0005591760 ceph-mgr[74522]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 04:34:47 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'k8sevents'
Jan 22 04:34:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:34:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'localpool'
Jan 22 04:34:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'mds_autoscaler'
Jan 22 04:34:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'mirroring'
Jan 22 04:34:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'nfs'
Jan 22 04:34:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:48.769+0000 7fc05e115140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 04:34:48 np0005591760 ceph-mgr[74522]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 04:34:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'orchestrator'
Jan 22 04:34:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:48.957+0000 7fc05e115140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 04:34:48 np0005591760 ceph-mgr[74522]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 04:34:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'osd_perf_query'
Jan 22 04:34:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:49.023+0000 7fc05e115140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'osd_support'
Jan 22 04:34:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:49.081+0000 7fc05e115140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'pg_autoscaler'
Jan 22 04:34:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:49.150+0000 7fc05e115140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'progress'
Jan 22 04:34:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:49.211+0000 7fc05e115140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'prometheus'
Jan 22 04:34:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:49.510+0000 7fc05e115140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rbd_support'
Jan 22 04:34:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:49.595+0000 7fc05e115140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'restful'
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rgw'
Jan 22 04:34:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:49.969+0000 7fc05e115140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 04:34:49 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rook'
Jan 22 04:34:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:50.451+0000 7fc05e115140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'selftest'
Jan 22 04:34:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:50.513+0000 7fc05e115140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'snap_schedule'
Jan 22 04:34:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:50.582+0000 7fc05e115140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'stats'
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'status'
Jan 22 04:34:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:50.710+0000 7fc05e115140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'telegraf'
Jan 22 04:34:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:50.772+0000 7fc05e115140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'telemetry'
Jan 22 04:34:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:50.905+0000 7fc05e115140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 04:34:50 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'test_orchestrator'
Jan 22 04:34:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:51.096+0000 7fc05e115140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'volumes'
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.upcmhd restarted
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.upcmhd started
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.bisona restarted
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.bisona started
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.rfmoog(active, starting, since 5s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:34:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:51.326+0000 7fc05e115140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'zabbix'
Jan 22 04:34:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:34:51.387+0000 7fc05e115140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Active manager daemon compute-0.rfmoog restarted
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rfmoog
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: ms_deliver_dispatch: unhandled message 0x55b8a06c7860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.rfmoog(active, starting, since 0.00953573s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map Activating!
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map I am now activating
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rfmoog", "id": "compute-0.rfmoog"} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr metadata", "who": "compute-0.rfmoog", "id": "compute-0.rfmoog"}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.upcmhd", "id": "compute-1.upcmhd"} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr metadata", "who": "compute-1.upcmhd", "id": "compute-1.upcmhd"}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.bisona", "id": "compute-2.bisona"} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr metadata", "who": "compute-2.bisona", "id": "compute-2.bisona"}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e1 all = 1
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: balancer
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Manager daemon compute-0.rfmoog is now available
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [balancer INFO root] Starting
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:34:51
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: cephadm
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: crash
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: dashboard
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO sso] Loading SSO DB version=1
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: devicehealth
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: iostat
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] Starting
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: nfs
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: orchestrator
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: pg_autoscaler
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: progress
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [progress INFO root] Loading...
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fc0014da6d0>, <progress.module.GhostEvent object at 0x7fc0014da910>, <progress.module.GhostEvent object at 0x7fc0014da940>, <progress.module.GhostEvent object at 0x7fc0014da970>, <progress.module.GhostEvent object at 0x7fc0014da9a0>] historic events
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [progress INFO root] Loaded OSDMap, ready.
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] recovery thread starting
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] starting setup
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: rbd_support
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: restful
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: status
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: telemetry
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [restful INFO root] server_addr: :: server_port: 8003
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [restful WARNING root] server not running: no certificate configured
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] PerfHandler: starting
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TaskHandler: starting
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"} v 0)
Jan 22 04:34:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"}]: dispatch
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] setup complete
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: volumes
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 22 04:34:51 np0005591760 systemd-logind[747]: New session 35 of user ceph-admin.
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 22 04:34:51 np0005591760 systemd[1]: Started Session 35 of User ceph-admin.
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 22 04:34:51 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.module] Engine started.
Jan 22 04:34:52 np0005591760 podman[92086]: 2026-01-22 09:34:52.246396454 +0000 UTC m=+0.039834681 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: Active manager daemon compute-0.rfmoog restarted
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: Activating manager daemon compute-0.rfmoog
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: Manager daemon compute-0.rfmoog is now available
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"}]: dispatch
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"}]: dispatch
Jan 22 04:34:52 np0005591760 podman[92086]: 2026-01-22 09:34:52.322976494 +0000 UTC m=+0.116414742 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.rfmoog(active, since 1.02482s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14481 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v3: 11 pgs: 11 active+clean; 454 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 22 04:34:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0[74250]: 2026-01-22T09:34:52.433+0000 7f397d526640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e2 new map
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e2 print_map
e2
btime 2026-01-22T09:34:52:434317+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1
 
Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	2
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-22T09:34:52.434283+0000
modified	2026-01-22T09:34:52.434283+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	
up	{}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 0 members: 
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 22 04:34:52 np0005591760 systemd[1]: libpod-ac5f6d3dc3dec0a9b92baccb2cc2b27ba3d3683e923a78e5306386d7b23124ea.scope: Deactivated successfully.
Jan 22 04:34:52 np0005591760 podman[91782]: 2026-01-22 09:34:52.468440534 +0000 UTC m=+9.062702775 container died ac5f6d3dc3dec0a9b92baccb2cc2b27ba3d3683e923a78e5306386d7b23124ea (image=quay.io/ceph/ceph:v19, name=dreamy_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 22 04:34:52 np0005591760 systemd[1]: var-lib-containers-storage-overlay-989e493b2b58d3a01e0d4da3e3bef4bb17d0d46f11ad7ab914dcc69aaa994bc0-merged.mount: Deactivated successfully.
Jan 22 04:34:52 np0005591760 podman[91782]: 2026-01-22 09:34:52.498209245 +0000 UTC m=+9.092471486 container remove ac5f6d3dc3dec0a9b92baccb2cc2b27ba3d3683e923a78e5306386d7b23124ea (image=quay.io/ceph/ceph:v19, name=dreamy_napier, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 04:34:52 np0005591760 systemd[1]: libpod-conmon-ac5f6d3dc3dec0a9b92baccb2cc2b27ba3d3683e923a78e5306386d7b23124ea.scope: Deactivated successfully.
Jan 22 04:34:52 np0005591760 podman[92192]: 2026-01-22 09:34:52.656461181 +0000 UTC m=+0.045367902 container exec 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:34:52 np0005591760 podman[92192]: 2026-01-22 09:34:52.665217075 +0000 UTC m=+0.054123796 container exec_died 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:34:52] ENGINE Bus STARTING
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:34:52] ENGINE Bus STARTING
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:52 np0005591760 python3[92232]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:34:52] ENGINE Serving on https://192.168.122.100:7150
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:34:52] ENGINE Serving on https://192.168.122.100:7150
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:34:52] ENGINE Client ('192.168.122.100', 34688) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:34:52] ENGINE Client ('192.168.122.100', 34688) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 04:34:52 np0005591760 podman[92286]: 2026-01-22 09:34:52.812338379 +0000 UTC m=+0.028981784 container create 5568de2aedb921a3ccfd954be6366823e2d38332914e97d2ccb3d57576725872 (image=quay.io/ceph/ceph:v19, name=amazing_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:52 np0005591760 systemd[1]: Started libpod-conmon-5568de2aedb921a3ccfd954be6366823e2d38332914e97d2ccb3d57576725872.scope.
Jan 22 04:34:52 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ff9225ed83e3cb6f152a83fc66ec6394235fb0a49b52743db85289d0503082/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ff9225ed83e3cb6f152a83fc66ec6394235fb0a49b52743db85289d0503082/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ff9225ed83e3cb6f152a83fc66ec6394235fb0a49b52743db85289d0503082/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:52 np0005591760 podman[92286]: 2026-01-22 09:34:52.868137391 +0000 UTC m=+0.084780796 container init 5568de2aedb921a3ccfd954be6366823e2d38332914e97d2ccb3d57576725872 (image=quay.io/ceph/ceph:v19, name=amazing_lehmann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 04:34:52 np0005591760 podman[92286]: 2026-01-22 09:34:52.873495341 +0000 UTC m=+0.090138736 container start 5568de2aedb921a3ccfd954be6366823e2d38332914e97d2ccb3d57576725872 (image=quay.io/ceph/ceph:v19, name=amazing_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:52 np0005591760 podman[92286]: 2026-01-22 09:34:52.874725598 +0000 UTC m=+0.091368992 container attach 5568de2aedb921a3ccfd954be6366823e2d38332914e97d2ccb3d57576725872 (image=quay.io/ceph/ceph:v19, name=amazing_lehmann, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:52 np0005591760 podman[92286]: 2026-01-22 09:34:52.799901917 +0000 UTC m=+0.016545332 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:34:52] ENGINE Serving on http://192.168.122.100:8765
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:34:52] ENGINE Serving on http://192.168.122.100:8765
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:34:52] ENGINE Bus STARTED
Jan 22 04:34:52 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:34:52] ENGINE Bus STARTED
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14520 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 amazing_lehmann[92334]: Scheduled mds.cephfs update...
Jan 22 04:34:53 np0005591760 podman[92286]: 2026-01-22 09:34:53.183383918 +0000 UTC m=+0.400027314 container died 5568de2aedb921a3ccfd954be6366823e2d38332914e97d2ccb3d57576725872 (image=quay.io/ceph/ceph:v19, name=amazing_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 22 04:34:53 np0005591760 systemd[1]: libpod-5568de2aedb921a3ccfd954be6366823e2d38332914e97d2ccb3d57576725872.scope: Deactivated successfully.
Jan 22 04:34:53 np0005591760 systemd[1]: var-lib-containers-storage-overlay-d6ff9225ed83e3cb6f152a83fc66ec6394235fb0a49b52743db85289d0503082-merged.mount: Deactivated successfully.
Jan 22 04:34:53 np0005591760 podman[92286]: 2026-01-22 09:34:53.204822771 +0000 UTC m=+0.421466166 container remove 5568de2aedb921a3ccfd954be6366823e2d38332914e97d2ccb3d57576725872 (image=quay.io/ceph/ceph:v19, name=amazing_lehmann, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:53 np0005591760 systemd[1]: libpod-conmon-5568de2aedb921a3ccfd954be6366823e2d38332914e97d2ccb3d57576725872.scope: Deactivated successfully.
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v5: 11 pgs: 11 active+clean; 454 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:34:52] ENGINE Bus STARTING
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:34:52] ENGINE Serving on https://192.168.122.100:7150
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:34:52] ENGINE Client ('192.168.122.100', 34688) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:34:52] ENGINE Serving on http://192.168.122.100:8765
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:34:52] ENGINE Bus STARTED
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 python3[92471]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] Check health
Jan 22 04:34:53 np0005591760 podman[92484]: 2026-01-22 09:34:53.507210868 +0000 UTC m=+0.028261030 container create a31c135472404a42a617d55bd935e4d139503cafe34d05a3632e944804d868f0 (image=quay.io/ceph/ceph:v19, name=elated_bhaskara, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 04:34:53 np0005591760 systemd[1]: Started libpod-conmon-a31c135472404a42a617d55bd935e4d139503cafe34d05a3632e944804d868f0.scope.
Jan 22 04:34:53 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:53 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d9d2f0fb1c66bdf8e11c0c366ca3737240a4ef964ca5ed56fbbfaea981fe16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:53 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d9d2f0fb1c66bdf8e11c0c366ca3737240a4ef964ca5ed56fbbfaea981fe16/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:53 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d9d2f0fb1c66bdf8e11c0c366ca3737240a4ef964ca5ed56fbbfaea981fe16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:53 np0005591760 podman[92484]: 2026-01-22 09:34:53.564455733 +0000 UTC m=+0.085505906 container init a31c135472404a42a617d55bd935e4d139503cafe34d05a3632e944804d868f0 (image=quay.io/ceph/ceph:v19, name=elated_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:53 np0005591760 podman[92484]: 2026-01-22 09:34:53.568988564 +0000 UTC m=+0.090038726 container start a31c135472404a42a617d55bd935e4d139503cafe34d05a3632e944804d868f0 (image=quay.io/ceph/ceph:v19, name=elated_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:53 np0005591760 podman[92484]: 2026-01-22 09:34:53.573802224 +0000 UTC m=+0.094852376 container attach a31c135472404a42a617d55bd935e4d139503cafe34d05a3632e944804d868f0 (image=quay.io/ceph/ceph:v19, name=elated_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 128.7M
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 128.7M
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:34:53 np0005591760 podman[92484]: 2026-01-22 09:34:53.496489278 +0000 UTC m=+0.017539430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:34:53 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14532 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Jan 22 04:34:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.rfmoog(active, since 2s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: Adjusting osd_memory_target on compute-1 to 128.7M
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: Unable to set osd_memory_target on compute-1 to 134966067: error parsing value: Value '134966067' is below minimum 939524096
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Jan 22 04:34:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 285a052d-aaac-419e-ad22-d41b36a9f7c9 (Updating node-exporter deployment (+2 -> 3))
Jan 22 04:34:55 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Jan 22 04:34:55 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Jan 22 04:34:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v7: 12 pgs: 1 unknown, 11 active+clean; 454 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: Deploying daemon node-exporter.compute-1 on compute-1
Jan 22 04:34:55 np0005591760 ceph-mgr[74522]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Jan 22 04:34:55 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:55 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:55 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 22 04:34:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:55 np0005591760 systemd[1]: libpod-a31c135472404a42a617d55bd935e4d139503cafe34d05a3632e944804d868f0.scope: Deactivated successfully.
Jan 22 04:34:55 np0005591760 podman[92484]: 2026-01-22 09:34:55.667319035 +0000 UTC m=+2.188369197 container died a31c135472404a42a617d55bd935e4d139503cafe34d05a3632e944804d868f0 (image=quay.io/ceph/ceph:v19, name=elated_bhaskara, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 04:34:55 np0005591760 systemd[1]: var-lib-containers-storage-overlay-90d9d2f0fb1c66bdf8e11c0c366ca3737240a4ef964ca5ed56fbbfaea981fe16-merged.mount: Deactivated successfully.
Jan 22 04:34:55 np0005591760 podman[92484]: 2026-01-22 09:34:55.687448793 +0000 UTC m=+2.208498945 container remove a31c135472404a42a617d55bd935e4d139503cafe34d05a3632e944804d868f0 (image=quay.io/ceph/ceph:v19, name=elated_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:55 np0005591760 systemd[1]: libpod-conmon-a31c135472404a42a617d55bd935e4d139503cafe34d05a3632e944804d868f0.scope: Deactivated successfully.
Jan 22 04:34:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 04:34:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.rfmoog(active, since 4s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:34:56 np0005591760 ceph-mgr[74522]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 22 04:34:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 22 04:34:56 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Jan 22 04:34:56 np0005591760 ceph-mon[74254]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:56 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:56 np0005591760 ceph-mon[74254]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 04:34:56 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:56 np0005591760 ceph-mon[74254]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 04:34:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 22 04:34:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 22 04:34:56 np0005591760 python3[93527]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 04:34:56 np0005591760 python3[93600]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769074496.5018718-37912-54201642177311/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=6b7917605681093964532d08a385bc3f0474a26c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:57 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Jan 22 04:34:57 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Jan 22 04:34:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v10: 12 pgs: 1 unknown, 11 active+clean; 454 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:34:57 np0005591760 python3[93650]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:57 np0005591760 podman[93651]: 2026-01-22 09:34:57.509338682 +0000 UTC m=+0.028315673 container create a33f31549d130db137c51b75d22200d8989a6b61f055cbac6bfaaac040401794 (image=quay.io/ceph/ceph:v19, name=brave_jepsen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 04:34:57 np0005591760 systemd[1]: Started libpod-conmon-a33f31549d130db137c51b75d22200d8989a6b61f055cbac6bfaaac040401794.scope.
Jan 22 04:34:57 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:57 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d28a30291c0233541a28ac90bdf5789ad3544bda882b62a7dd13cd2ca9d63d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:57 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d28a30291c0233541a28ac90bdf5789ad3544bda882b62a7dd13cd2ca9d63d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:57 np0005591760 podman[93651]: 2026-01-22 09:34:57.572354839 +0000 UTC m=+0.091331830 container init a33f31549d130db137c51b75d22200d8989a6b61f055cbac6bfaaac040401794 (image=quay.io/ceph/ceph:v19, name=brave_jepsen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:34:57 np0005591760 podman[93651]: 2026-01-22 09:34:57.57672416 +0000 UTC m=+0.095701150 container start a33f31549d130db137c51b75d22200d8989a6b61f055cbac6bfaaac040401794 (image=quay.io/ceph/ceph:v19, name=brave_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:57 np0005591760 podman[93651]: 2026-01-22 09:34:57.577964695 +0000 UTC m=+0.096941686 container attach a33f31549d130db137c51b75d22200d8989a6b61f055cbac6bfaaac040401794 (image=quay.io/ceph/ceph:v19, name=brave_jepsen, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 04:34:57 np0005591760 podman[93651]: 2026-01-22 09:34:57.498556938 +0000 UTC m=+0.017533969 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: Deploying daemon node-exporter.compute-2 on compute-2
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 22 04:34:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 22 04:34:57 np0005591760 systemd[1]: libpod-a33f31549d130db137c51b75d22200d8989a6b61f055cbac6bfaaac040401794.scope: Deactivated successfully.
Jan 22 04:34:57 np0005591760 podman[93651]: 2026-01-22 09:34:57.911339505 +0000 UTC m=+0.430316506 container died a33f31549d130db137c51b75d22200d8989a6b61f055cbac6bfaaac040401794 (image=quay.io/ceph/ceph:v19, name=brave_jepsen, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:57 np0005591760 systemd[1]: var-lib-containers-storage-overlay-82d28a30291c0233541a28ac90bdf5789ad3544bda882b62a7dd13cd2ca9d63d-merged.mount: Deactivated successfully.
Jan 22 04:34:57 np0005591760 podman[93651]: 2026-01-22 09:34:57.930696631 +0000 UTC m=+0.449673622 container remove a33f31549d130db137c51b75d22200d8989a6b61f055cbac6bfaaac040401794 (image=quay.io/ceph/ceph:v19, name=brave_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:57 np0005591760 systemd[1]: libpod-conmon-a33f31549d130db137c51b75d22200d8989a6b61f055cbac6bfaaac040401794.scope: Deactivated successfully.
Jan 22 04:34:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:34:58 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 04:34:58 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.rfmoog(active, since 6s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:34:58 np0005591760 python3[93723]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:58 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 22 04:34:58 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/1531131488' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 22 04:34:58 np0005591760 ceph-mon[74254]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 22 04:34:58 np0005591760 ceph-mon[74254]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 04:34:58 np0005591760 podman[93725]: 2026-01-22 09:34:58.654123724 +0000 UTC m=+0.027386548 container create 81e856b9e0d19bf46dd06b0fb5edde5b3151c75d4fece4f890838c210838bf63 (image=quay.io/ceph/ceph:v19, name=busy_hofstadter, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:58 np0005591760 systemd[1]: Started libpod-conmon-81e856b9e0d19bf46dd06b0fb5edde5b3151c75d4fece4f890838c210838bf63.scope.
Jan 22 04:34:58 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:58 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9378010e1bd6cb9819d021ce6a1596b4e8f46cc72f7663c94263ad072800addc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:58 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9378010e1bd6cb9819d021ce6a1596b4e8f46cc72f7663c94263ad072800addc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:58 np0005591760 podman[93725]: 2026-01-22 09:34:58.7104588 +0000 UTC m=+0.083721653 container init 81e856b9e0d19bf46dd06b0fb5edde5b3151c75d4fece4f890838c210838bf63 (image=quay.io/ceph/ceph:v19, name=busy_hofstadter, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 04:34:58 np0005591760 podman[93725]: 2026-01-22 09:34:58.714825045 +0000 UTC m=+0.088087878 container start 81e856b9e0d19bf46dd06b0fb5edde5b3151c75d4fece4f890838c210838bf63 (image=quay.io/ceph/ceph:v19, name=busy_hofstadter, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:58 np0005591760 podman[93725]: 2026-01-22 09:34:58.715997462 +0000 UTC m=+0.089260314 container attach 81e856b9e0d19bf46dd06b0fb5edde5b3151c75d4fece4f890838c210838bf63 (image=quay.io/ceph/ceph:v19, name=busy_hofstadter, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:58 np0005591760 podman[93725]: 2026-01-22 09:34:58.64325133 +0000 UTC m=+0.016514182 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3866496445' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 04:34:59 np0005591760 busy_hofstadter[93739]: 
Jan 22 04:34:59 np0005591760 busy_hofstadter[93739]: {"fsid":"43df7a30-cf5f-5209-adfd-bf44298b19f2","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":55,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1769074461,"num_in_osds":3,"osd_in_since":1769074446,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":11},{"state_name":"unknown","count":1}],"num_pgs":12,"num_pools":12,"num_objects":194,"data_bytes":464595,"bytes_used":84307968,"bytes_avail":64327618560,"bytes_total":64411926528,"unknown_pgs_ratio":0.083333335816860199},"fsmap":{"epoch":2,"btime":"2026-01-22T09:34:52:434317+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":3,"modified":"2026-01-22T09:34:53.405272+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.rfmoog":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.upcmhd":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.bisona":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","24184":{"start_epoch":3,"start_stamp":"2026-01-22T09:34:52.427888+0000","gid":24184,"addr":"192.168.122.102:0/3366931585","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC 7763 64-Core Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.aqqfbf","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7865364","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"30fc5159-0993-4ff0-a95e-d1f2df875388","zone_name":"default","zonegroup_id":"466b069f-ae0a-4d3b-a92d-186e5cb7d7b9","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"0ba2771f-8592-42ff-83ca-6865af7a769e":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true},"285a052d-aaac-419e-ad22-d41b36a9f7c9":{"message":"Updating node-exporter deployment (+2 -> 3) (1s)\n      [==============..............] (remaining: 1s)","progress":0.5,"add_to_ceph_s":true}}}
Jan 22 04:34:59 np0005591760 systemd[1]: libpod-81e856b9e0d19bf46dd06b0fb5edde5b3151c75d4fece4f890838c210838bf63.scope: Deactivated successfully.
Jan 22 04:34:59 np0005591760 conmon[93739]: conmon 81e856b9e0d19bf46dd0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81e856b9e0d19bf46dd06b0fb5edde5b3151c75d4fece4f890838c210838bf63.scope/container/memory.events
Jan 22 04:34:59 np0005591760 podman[93764]: 2026-01-22 09:34:59.071209106 +0000 UTC m=+0.014928135 container died 81e856b9e0d19bf46dd06b0fb5edde5b3151c75d4fece4f890838c210838bf63 (image=quay.io/ceph/ceph:v19, name=busy_hofstadter, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 04:34:59 np0005591760 systemd[1]: var-lib-containers-storage-overlay-9378010e1bd6cb9819d021ce6a1596b4e8f46cc72f7663c94263ad072800addc-merged.mount: Deactivated successfully.
Jan 22 04:34:59 np0005591760 podman[93764]: 2026-01-22 09:34:59.088106412 +0000 UTC m=+0.031825420 container remove 81e856b9e0d19bf46dd06b0fb5edde5b3151c75d4fece4f890838c210838bf63 (image=quay.io/ceph/ceph:v19, name=busy_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:34:59 np0005591760 systemd[1]: libpod-conmon-81e856b9e0d19bf46dd06b0fb5edde5b3151c75d4fece4f890838c210838bf63.scope: Deactivated successfully.
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:59 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 285a052d-aaac-419e-ad22-d41b36a9f7c9 (Updating node-exporter deployment (+2 -> 3))
Jan 22 04:34:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 285a052d-aaac-419e-ad22-d41b36a9f7c9 (Updating node-exporter deployment (+2 -> 3)) in 4 seconds
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:34:59 np0005591760 python3[93851]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:34:59 np0005591760 podman[93854]: 2026-01-22 09:34:59.395342104 +0000 UTC m=+0.027862657 container create 26155bbbc26eaac2f9dcab4d4f9c0f308b670cd96cd7ba92340746802a9a9c5b (image=quay.io/ceph/ceph:v19, name=mystifying_kapitsa, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:34:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v11: 12 pgs: 12 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Jan 22 04:34:59 np0005591760 systemd[1]: Started libpod-conmon-26155bbbc26eaac2f9dcab4d4f9c0f308b670cd96cd7ba92340746802a9a9c5b.scope.
Jan 22 04:34:59 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25aef43fcbc1d0fd35746a606df8cef7642bb8feb3b4ee6d0b434d3dbcb401c5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25aef43fcbc1d0fd35746a606df8cef7642bb8feb3b4ee6d0b434d3dbcb401c5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:59 np0005591760 podman[93854]: 2026-01-22 09:34:59.450303062 +0000 UTC m=+0.082823625 container init 26155bbbc26eaac2f9dcab4d4f9c0f308b670cd96cd7ba92340746802a9a9c5b (image=quay.io/ceph/ceph:v19, name=mystifying_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:59 np0005591760 podman[93854]: 2026-01-22 09:34:59.454497211 +0000 UTC m=+0.087017755 container start 26155bbbc26eaac2f9dcab4d4f9c0f308b670cd96cd7ba92340746802a9a9c5b (image=quay.io/ceph/ceph:v19, name=mystifying_kapitsa, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:34:59 np0005591760 podman[93854]: 2026-01-22 09:34:59.455687162 +0000 UTC m=+0.088207704 container attach 26155bbbc26eaac2f9dcab4d4f9c0f308b670cd96cd7ba92340746802a9a9c5b (image=quay.io/ceph/ceph:v19, name=mystifying_kapitsa, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:59 np0005591760 podman[93854]: 2026-01-22 09:34:59.384114567 +0000 UTC m=+0.016635140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:34:59 np0005591760 podman[93899]: 2026-01-22 09:34:59.536849955 +0000 UTC m=+0.026253155 container create f35e541c7e8b5d32f143dc821fbeb757aad864b7d130f0d1fabffe4ced5a0fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:59 np0005591760 systemd[1]: Started libpod-conmon-f35e541c7e8b5d32f143dc821fbeb757aad864b7d130f0d1fabffe4ced5a0fb9.scope.
Jan 22 04:34:59 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:59 np0005591760 podman[93899]: 2026-01-22 09:34:59.586111968 +0000 UTC m=+0.075515177 container init f35e541c7e8b5d32f143dc821fbeb757aad864b7d130f0d1fabffe4ced5a0fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cerf, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:34:59 np0005591760 podman[93899]: 2026-01-22 09:34:59.589742913 +0000 UTC m=+0.079146112 container start f35e541c7e8b5d32f143dc821fbeb757aad864b7d130f0d1fabffe4ced5a0fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cerf, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:59 np0005591760 keen_cerf[93931]: 167 167
Jan 22 04:34:59 np0005591760 systemd[1]: libpod-f35e541c7e8b5d32f143dc821fbeb757aad864b7d130f0d1fabffe4ced5a0fb9.scope: Deactivated successfully.
Jan 22 04:34:59 np0005591760 podman[93899]: 2026-01-22 09:34:59.594254533 +0000 UTC m=+0.083657752 container attach f35e541c7e8b5d32f143dc821fbeb757aad864b7d130f0d1fabffe4ced5a0fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cerf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:59 np0005591760 podman[93899]: 2026-01-22 09:34:59.594416921 +0000 UTC m=+0.083820119 container died f35e541c7e8b5d32f143dc821fbeb757aad864b7d130f0d1fabffe4ced5a0fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cerf, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:34:59 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1ebef84bed1466cb13d412b65bf4d997221e2a51d4eb0c350cf223c7875ac97c-merged.mount: Deactivated successfully.
Jan 22 04:34:59 np0005591760 podman[93899]: 2026-01-22 09:34:59.617657127 +0000 UTC m=+0.107060326 container remove f35e541c7e8b5d32f143dc821fbeb757aad864b7d130f0d1fabffe4ced5a0fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cerf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:34:59 np0005591760 podman[93899]: 2026-01-22 09:34:59.525581562 +0000 UTC m=+0.014984781 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:59 np0005591760 systemd[1]: libpod-conmon-f35e541c7e8b5d32f143dc821fbeb757aad864b7d130f0d1fabffe4ced5a0fb9.scope: Deactivated successfully.
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:34:59 np0005591760 podman[93953]: 2026-01-22 09:34:59.72957117 +0000 UTC m=+0.027033210 container create 4e84eb887e5f25ac78a8075036adc8df5f4de32f4a20db231c4c0e973104f9c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_dewdney, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 04:34:59 np0005591760 systemd[1]: Started libpod-conmon-4e84eb887e5f25ac78a8075036adc8df5f4de32f4a20db231c4c0e973104f9c7.scope.
Jan 22 04:34:59 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:34:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9d422060dea3ec93011a8ffa01ec891c52d2df47e3ba0c0b8ff0e71de6eae9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9d422060dea3ec93011a8ffa01ec891c52d2df47e3ba0c0b8ff0e71de6eae9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9d422060dea3ec93011a8ffa01ec891c52d2df47e3ba0c0b8ff0e71de6eae9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9d422060dea3ec93011a8ffa01ec891c52d2df47e3ba0c0b8ff0e71de6eae9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9d422060dea3ec93011a8ffa01ec891c52d2df47e3ba0c0b8ff0e71de6eae9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 22 04:34:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3545240948' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 04:34:59 np0005591760 mystifying_kapitsa[93884]: 
Jan 22 04:34:59 np0005591760 mystifying_kapitsa[93884]: {"epoch":3,"fsid":"43df7a30-cf5f-5209-adfd-bf44298b19f2","modified":"2026-01-22T09:33:58.139199Z","created":"2026-01-22T09:32:15.320230Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 22 04:34:59 np0005591760 mystifying_kapitsa[93884]: dumped monmap epoch 3
Jan 22 04:34:59 np0005591760 podman[93953]: 2026-01-22 09:34:59.790682625 +0000 UTC m=+0.088144666 container init 4e84eb887e5f25ac78a8075036adc8df5f4de32f4a20db231c4c0e973104f9c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_dewdney, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 04:34:59 np0005591760 podman[93953]: 2026-01-22 09:34:59.796241474 +0000 UTC m=+0.093703515 container start 4e84eb887e5f25ac78a8075036adc8df5f4de32f4a20db231c4c0e973104f9c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_dewdney, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:59 np0005591760 podman[93953]: 2026-01-22 09:34:59.79852944 +0000 UTC m=+0.095991501 container attach 4e84eb887e5f25ac78a8075036adc8df5f4de32f4a20db231c4c0e973104f9c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_dewdney, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:34:59 np0005591760 systemd[1]: libpod-26155bbbc26eaac2f9dcab4d4f9c0f308b670cd96cd7ba92340746802a9a9c5b.scope: Deactivated successfully.
Jan 22 04:34:59 np0005591760 podman[93854]: 2026-01-22 09:34:59.804820505 +0000 UTC m=+0.437341057 container died 26155bbbc26eaac2f9dcab4d4f9c0f308b670cd96cd7ba92340746802a9a9c5b (image=quay.io/ceph/ceph:v19, name=mystifying_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:34:59 np0005591760 podman[93953]: 2026-01-22 09:34:59.718731576 +0000 UTC m=+0.016193637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:34:59 np0005591760 systemd[1]: var-lib-containers-storage-overlay-25aef43fcbc1d0fd35746a606df8cef7642bb8feb3b4ee6d0b434d3dbcb401c5-merged.mount: Deactivated successfully.
Jan 22 04:34:59 np0005591760 podman[93854]: 2026-01-22 09:34:59.824228265 +0000 UTC m=+0.456748818 container remove 26155bbbc26eaac2f9dcab4d4f9c0f308b670cd96cd7ba92340746802a9a9c5b (image=quay.io/ceph/ceph:v19, name=mystifying_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 22 04:34:59 np0005591760 systemd[1]: libpod-conmon-26155bbbc26eaac2f9dcab4d4f9c0f308b670cd96cd7ba92340746802a9a9c5b.scope: Deactivated successfully.
Jan 22 04:35:00 np0005591760 sharp_dewdney[93966]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:35:00 np0005591760 sharp_dewdney[93966]: --> All data devices are unavailable
Jan 22 04:35:00 np0005591760 systemd[1]: libpod-4e84eb887e5f25ac78a8075036adc8df5f4de32f4a20db231c4c0e973104f9c7.scope: Deactivated successfully.
Jan 22 04:35:00 np0005591760 podman[93953]: 2026-01-22 09:35:00.067894956 +0000 UTC m=+0.365356996 container died 4e84eb887e5f25ac78a8075036adc8df5f4de32f4a20db231c4c0e973104f9c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_dewdney, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 04:35:00 np0005591760 systemd[1]: var-lib-containers-storage-overlay-de9d422060dea3ec93011a8ffa01ec891c52d2df47e3ba0c0b8ff0e71de6eae9-merged.mount: Deactivated successfully.
Jan 22 04:35:00 np0005591760 podman[93953]: 2026-01-22 09:35:00.093036769 +0000 UTC m=+0.390498809 container remove 4e84eb887e5f25ac78a8075036adc8df5f4de32f4a20db231c4c0e973104f9c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:00 np0005591760 systemd[1]: libpod-conmon-4e84eb887e5f25ac78a8075036adc8df5f4de32f4a20db231c4c0e973104f9c7.scope: Deactivated successfully.
Jan 22 04:35:00 np0005591760 python3[94076]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:35:00 np0005591760 podman[94089]: 2026-01-22 09:35:00.426026912 +0000 UTC m=+0.031703359 container create 935445ac666e9789484bff2fb2427eeb07e9624d660ebfb8793d31cbca6dbe1e (image=quay.io/ceph/ceph:v19, name=hopeful_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:35:00 np0005591760 systemd[1]: Started libpod-conmon-935445ac666e9789484bff2fb2427eeb07e9624d660ebfb8793d31cbca6dbe1e.scope.
Jan 22 04:35:00 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c15afde67fb6f314fa42f91a4eb81eb7393e51de1abc259674f6020399c5d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c15afde67fb6f314fa42f91a4eb81eb7393e51de1abc259674f6020399c5d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:00 np0005591760 podman[94089]: 2026-01-22 09:35:00.480926533 +0000 UTC m=+0.086602980 container init 935445ac666e9789484bff2fb2427eeb07e9624d660ebfb8793d31cbca6dbe1e (image=quay.io/ceph/ceph:v19, name=hopeful_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 04:35:00 np0005591760 podman[94089]: 2026-01-22 09:35:00.485274764 +0000 UTC m=+0.090951211 container start 935445ac666e9789484bff2fb2427eeb07e9624d660ebfb8793d31cbca6dbe1e (image=quay.io/ceph/ceph:v19, name=hopeful_tharp, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 04:35:00 np0005591760 podman[94089]: 2026-01-22 09:35:00.486286236 +0000 UTC m=+0.091962683 container attach 935445ac666e9789484bff2fb2427eeb07e9624d660ebfb8793d31cbca6dbe1e (image=quay.io/ceph/ceph:v19, name=hopeful_tharp, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:35:00 np0005591760 podman[94089]: 2026-01-22 09:35:00.412413744 +0000 UTC m=+0.018090211 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:35:00 np0005591760 podman[94124]: 2026-01-22 09:35:00.531183318 +0000 UTC m=+0.028345210 container create 197d895c7756bd60603b0b5cbf17c4b64b7c5d78fb484c8b819b0f1de8645905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_merkle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:35:00 np0005591760 systemd[1]: Started libpod-conmon-197d895c7756bd60603b0b5cbf17c4b64b7c5d78fb484c8b819b0f1de8645905.scope.
Jan 22 04:35:00 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:00 np0005591760 podman[94124]: 2026-01-22 09:35:00.582509394 +0000 UTC m=+0.079671306 container init 197d895c7756bd60603b0b5cbf17c4b64b7c5d78fb484c8b819b0f1de8645905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:35:00 np0005591760 podman[94124]: 2026-01-22 09:35:00.586803602 +0000 UTC m=+0.083965484 container start 197d895c7756bd60603b0b5cbf17c4b64b7c5d78fb484c8b819b0f1de8645905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_merkle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:35:00 np0005591760 podman[94124]: 2026-01-22 09:35:00.58801378 +0000 UTC m=+0.085175672 container attach 197d895c7756bd60603b0b5cbf17c4b64b7c5d78fb484c8b819b0f1de8645905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_merkle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:35:00 np0005591760 naughty_merkle[94137]: 167 167
Jan 22 04:35:00 np0005591760 systemd[1]: libpod-197d895c7756bd60603b0b5cbf17c4b64b7c5d78fb484c8b819b0f1de8645905.scope: Deactivated successfully.
Jan 22 04:35:00 np0005591760 podman[94124]: 2026-01-22 09:35:00.590059067 +0000 UTC m=+0.087220960 container died 197d895c7756bd60603b0b5cbf17c4b64b7c5d78fb484c8b819b0f1de8645905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_merkle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:35:00 np0005591760 podman[94124]: 2026-01-22 09:35:00.607268202 +0000 UTC m=+0.104430094 container remove 197d895c7756bd60603b0b5cbf17c4b64b7c5d78fb484c8b819b0f1de8645905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:35:00 np0005591760 podman[94124]: 2026-01-22 09:35:00.519736547 +0000 UTC m=+0.016898459 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:35:00 np0005591760 systemd[1]: libpod-conmon-197d895c7756bd60603b0b5cbf17c4b64b7c5d78fb484c8b819b0f1de8645905.scope: Deactivated successfully.
Jan 22 04:35:00 np0005591760 podman[94179]: 2026-01-22 09:35:00.716739225 +0000 UTC m=+0.026081269 container create ed005574c3446909733ccfda6a214fd14aa2a4a53b37b54e3c854600a696e19d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:00 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3ee0a035d5aad558585927f021f16e77e1bead2e72927d03e4ca69b995af0903-merged.mount: Deactivated successfully.
Jan 22 04:35:00 np0005591760 systemd[1]: Started libpod-conmon-ed005574c3446909733ccfda6a214fd14aa2a4a53b37b54e3c854600a696e19d.scope.
Jan 22 04:35:00 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5711b61ba50a0a3e73d67400b5858ea4f7271c84765a9d78ed5bd5fc070100/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5711b61ba50a0a3e73d67400b5858ea4f7271c84765a9d78ed5bd5fc070100/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5711b61ba50a0a3e73d67400b5858ea4f7271c84765a9d78ed5bd5fc070100/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5711b61ba50a0a3e73d67400b5858ea4f7271c84765a9d78ed5bd5fc070100/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:00 np0005591760 podman[94179]: 2026-01-22 09:35:00.769238599 +0000 UTC m=+0.078580664 container init ed005574c3446909733ccfda6a214fd14aa2a4a53b37b54e3c854600a696e19d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 22 04:35:00 np0005591760 podman[94179]: 2026-01-22 09:35:00.774173258 +0000 UTC m=+0.083515302 container start ed005574c3446909733ccfda6a214fd14aa2a4a53b37b54e3c854600a696e19d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hodgkin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:35:00 np0005591760 podman[94179]: 2026-01-22 09:35:00.777589477 +0000 UTC m=+0.086931531 container attach ed005574c3446909733ccfda6a214fd14aa2a4a53b37b54e3c854600a696e19d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hodgkin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:35:00 np0005591760 podman[94179]: 2026-01-22 09:35:00.705881778 +0000 UTC m=+0.015223843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:35:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 22 04:35:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3300870308' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 22 04:35:00 np0005591760 hopeful_tharp[94120]: [client.openstack]
Jan 22 04:35:00 np0005591760 hopeful_tharp[94120]: #011key = AQB/7nFpAAAAABAAQNyvDHMP/jPPMACmotCIjQ==
Jan 22 04:35:00 np0005591760 hopeful_tharp[94120]: #011caps mgr = "allow *"
Jan 22 04:35:00 np0005591760 hopeful_tharp[94120]: #011caps mon = "profile rbd"
Jan 22 04:35:00 np0005591760 hopeful_tharp[94120]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 22 04:35:00 np0005591760 systemd[1]: libpod-935445ac666e9789484bff2fb2427eeb07e9624d660ebfb8793d31cbca6dbe1e.scope: Deactivated successfully.
Jan 22 04:35:00 np0005591760 podman[94199]: 2026-01-22 09:35:00.871103553 +0000 UTC m=+0.017382345 container died 935445ac666e9789484bff2fb2427eeb07e9624d660ebfb8793d31cbca6dbe1e (image=quay.io/ceph/ceph:v19, name=hopeful_tharp, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:35:00 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b1c15afde67fb6f314fa42f91a4eb81eb7393e51de1abc259674f6020399c5d9-merged.mount: Deactivated successfully.
Jan 22 04:35:00 np0005591760 podman[94199]: 2026-01-22 09:35:00.888269797 +0000 UTC m=+0.034548589 container remove 935445ac666e9789484bff2fb2427eeb07e9624d660ebfb8793d31cbca6dbe1e (image=quay.io/ceph/ceph:v19, name=hopeful_tharp, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:00 np0005591760 systemd[1]: libpod-conmon-935445ac666e9789484bff2fb2427eeb07e9624d660ebfb8793d31cbca6dbe1e.scope: Deactivated successfully.
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]: {
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:    "0": [
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:        {
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            "devices": [
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "/dev/loop3"
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            ],
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            "lv_name": "ceph_lv0",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            "lv_size": "21470642176",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            "name": "ceph_lv0",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            "tags": {
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.cluster_name": "ceph",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.crush_device_class": "",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.encrypted": "0",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.osd_id": "0",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.type": "block",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.vdo": "0",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:                "ceph.with_tpm": "0"
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            },
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            "type": "block",
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:            "vg_name": "ceph_vg0"
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:        }
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]:    ]
Jan 22 04:35:00 np0005591760 peaceful_hodgkin[94192]: }
Jan 22 04:35:01 np0005591760 systemd[1]: libpod-ed005574c3446909733ccfda6a214fd14aa2a4a53b37b54e3c854600a696e19d.scope: Deactivated successfully.
Jan 22 04:35:01 np0005591760 podman[94179]: 2026-01-22 09:35:01.013102963 +0000 UTC m=+0.322445007 container died ed005574c3446909733ccfda6a214fd14aa2a4a53b37b54e3c854600a696e19d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hodgkin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:35:01 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6b5711b61ba50a0a3e73d67400b5858ea4f7271c84765a9d78ed5bd5fc070100-merged.mount: Deactivated successfully.
Jan 22 04:35:01 np0005591760 podman[94179]: 2026-01-22 09:35:01.03353942 +0000 UTC m=+0.342881464 container remove ed005574c3446909733ccfda6a214fd14aa2a4a53b37b54e3c854600a696e19d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hodgkin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 04:35:01 np0005591760 systemd[1]: libpod-conmon-ed005574c3446909733ccfda6a214fd14aa2a4a53b37b54e3c854600a696e19d.scope: Deactivated successfully.
Jan 22 04:35:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v12: 12 pgs: 12 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Jan 22 04:35:01 np0005591760 podman[94306]: 2026-01-22 09:35:01.424617913 +0000 UTC m=+0.027711292 container create bf9bf42bc4229266105508d9f51d54076d4ea9a6f298aa9e5d809dbc56cdd8e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bhabha, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 04:35:01 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 6 completed events
Jan 22 04:35:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:35:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:01 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 0ba2771f-8592-42ff-83ca-6865af7a769e (Global Recovery Event) in 5 seconds
Jan 22 04:35:01 np0005591760 systemd[1]: Started libpod-conmon-bf9bf42bc4229266105508d9f51d54076d4ea9a6f298aa9e5d809dbc56cdd8e3.scope.
Jan 22 04:35:01 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:01 np0005591760 podman[94306]: 2026-01-22 09:35:01.480897213 +0000 UTC m=+0.083990592 container init bf9bf42bc4229266105508d9f51d54076d4ea9a6f298aa9e5d809dbc56cdd8e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:01 np0005591760 podman[94306]: 2026-01-22 09:35:01.485448518 +0000 UTC m=+0.088541887 container start bf9bf42bc4229266105508d9f51d54076d4ea9a6f298aa9e5d809dbc56cdd8e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bhabha, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:35:01 np0005591760 podman[94306]: 2026-01-22 09:35:01.486881597 +0000 UTC m=+0.089974976 container attach bf9bf42bc4229266105508d9f51d54076d4ea9a6f298aa9e5d809dbc56cdd8e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 22 04:35:01 np0005591760 hardcore_bhabha[94320]: 167 167
Jan 22 04:35:01 np0005591760 systemd[1]: libpod-bf9bf42bc4229266105508d9f51d54076d4ea9a6f298aa9e5d809dbc56cdd8e3.scope: Deactivated successfully.
Jan 22 04:35:01 np0005591760 conmon[94320]: conmon bf9bf42bc42292661055 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf9bf42bc4229266105508d9f51d54076d4ea9a6f298aa9e5d809dbc56cdd8e3.scope/container/memory.events
Jan 22 04:35:01 np0005591760 podman[94306]: 2026-01-22 09:35:01.488759227 +0000 UTC m=+0.091852606 container died bf9bf42bc4229266105508d9f51d54076d4ea9a6f298aa9e5d809dbc56cdd8e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bhabha, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:35:01 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0cff08a4e5c521ff0ca6a3a5f658dcea09f7c9cd483db46dc65929447d81ab51-merged.mount: Deactivated successfully.
Jan 22 04:35:01 np0005591760 podman[94306]: 2026-01-22 09:35:01.413092373 +0000 UTC m=+0.016185773 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:35:01 np0005591760 podman[94306]: 2026-01-22 09:35:01.513141734 +0000 UTC m=+0.116235113 container remove bf9bf42bc4229266105508d9f51d54076d4ea9a6f298aa9e5d809dbc56cdd8e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 22 04:35:01 np0005591760 systemd[1]: libpod-conmon-bf9bf42bc4229266105508d9f51d54076d4ea9a6f298aa9e5d809dbc56cdd8e3.scope: Deactivated successfully.
Jan 22 04:35:01 np0005591760 podman[94342]: 2026-01-22 09:35:01.624521666 +0000 UTC m=+0.029811061 container create 87ad7d8528453c9660259a45e683cd091d36af688d7fbe20cb83de8dc5ddd221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:01 np0005591760 systemd[1]: Started libpod-conmon-87ad7d8528453c9660259a45e683cd091d36af688d7fbe20cb83de8dc5ddd221.scope.
Jan 22 04:35:01 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:01 np0005591760 ceph-mon[74254]: from='client.? 192.168.122.100:0/3300870308' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 22 04:35:01 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7206da6f8baa025996f1b7038047f2a700a65abc969004181e6cdb6b444ab1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7206da6f8baa025996f1b7038047f2a700a65abc969004181e6cdb6b444ab1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7206da6f8baa025996f1b7038047f2a700a65abc969004181e6cdb6b444ab1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7206da6f8baa025996f1b7038047f2a700a65abc969004181e6cdb6b444ab1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:01 np0005591760 podman[94342]: 2026-01-22 09:35:01.670936758 +0000 UTC m=+0.076226162 container init 87ad7d8528453c9660259a45e683cd091d36af688d7fbe20cb83de8dc5ddd221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lehmann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:01 np0005591760 podman[94342]: 2026-01-22 09:35:01.675546402 +0000 UTC m=+0.080835787 container start 87ad7d8528453c9660259a45e683cd091d36af688d7fbe20cb83de8dc5ddd221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lehmann, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:35:01 np0005591760 podman[94342]: 2026-01-22 09:35:01.676605134 +0000 UTC m=+0.081894518 container attach 87ad7d8528453c9660259a45e683cd091d36af688d7fbe20cb83de8dc5ddd221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lehmann, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:35:01 np0005591760 podman[94342]: 2026-01-22 09:35:01.612877321 +0000 UTC m=+0.018166736 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:35:02 np0005591760 bold_lehmann[94356]: {}
Jan 22 04:35:02 np0005591760 lvm[94555]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:35:02 np0005591760 lvm[94555]: VG ceph_vg0 finished
Jan 22 04:35:02 np0005591760 systemd[1]: libpod-87ad7d8528453c9660259a45e683cd091d36af688d7fbe20cb83de8dc5ddd221.scope: Deactivated successfully.
Jan 22 04:35:02 np0005591760 podman[94342]: 2026-01-22 09:35:02.16734301 +0000 UTC m=+0.572632396 container died 87ad7d8528453c9660259a45e683cd091d36af688d7fbe20cb83de8dc5ddd221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lehmann, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 04:35:02 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3e7206da6f8baa025996f1b7038047f2a700a65abc969004181e6cdb6b444ab1-merged.mount: Deactivated successfully.
Jan 22 04:35:02 np0005591760 podman[94342]: 2026-01-22 09:35:02.189334738 +0000 UTC m=+0.594624123 container remove 87ad7d8528453c9660259a45e683cd091d36af688d7fbe20cb83de8dc5ddd221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:02 np0005591760 systemd[1]: libpod-conmon-87ad7d8528453c9660259a45e683cd091d36af688d7fbe20cb83de8dc5ddd221.scope: Deactivated successfully.
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:02 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev c06e9944-a32a-4b1b-bb63-5093ecdd57cb (Updating rgw.rgw deployment (+2 -> 3))
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.kjnvpx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.kjnvpx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.kjnvpx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:35:02 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.kjnvpx on compute-1
Jan 22 04:35:02 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.kjnvpx on compute-1
Jan 22 04:35:02 np0005591760 ansible-async_wrapper.py[94593]: Invoked with j222079012296 30 /home/zuul/.ansible/tmp/ansible-tmp-1769074501.908545-37984-75402505419913/AnsiballZ_command.py _
Jan 22 04:35:02 np0005591760 ansible-async_wrapper.py[94596]: Starting module and watcher
Jan 22 04:35:02 np0005591760 ansible-async_wrapper.py[94596]: Start watching 94597 (30)
Jan 22 04:35:02 np0005591760 ansible-async_wrapper.py[94597]: Start module (94597)
Jan 22 04:35:02 np0005591760 ansible-async_wrapper.py[94593]: Return async_wrapper task started.
Jan 22 04:35:02 np0005591760 python3[94598]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:35:02 np0005591760 podman[94599]: 2026-01-22 09:35:02.485521431 +0000 UTC m=+0.028595824 container create f700accf7e27608deb1912b4bc693be751d84474948460caeaa492e5ac57fc18 (image=quay.io/ceph/ceph:v19, name=happy_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:35:02 np0005591760 systemd[1]: Started libpod-conmon-f700accf7e27608deb1912b4bc693be751d84474948460caeaa492e5ac57fc18.scope.
Jan 22 04:35:02 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:02 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52333883a9bc299b31d1fea4e54c9351cafab53747e417eb85fbb75d89ce435/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:02 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52333883a9bc299b31d1fea4e54c9351cafab53747e417eb85fbb75d89ce435/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:02 np0005591760 podman[94599]: 2026-01-22 09:35:02.541007121 +0000 UTC m=+0.084081523 container init f700accf7e27608deb1912b4bc693be751d84474948460caeaa492e5ac57fc18 (image=quay.io/ceph/ceph:v19, name=happy_margulis, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 04:35:02 np0005591760 podman[94599]: 2026-01-22 09:35:02.545183106 +0000 UTC m=+0.088257498 container start f700accf7e27608deb1912b4bc693be751d84474948460caeaa492e5ac57fc18 (image=quay.io/ceph/ceph:v19, name=happy_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:35:02 np0005591760 podman[94599]: 2026-01-22 09:35:02.546304727 +0000 UTC m=+0.089379118 container attach f700accf7e27608deb1912b4bc693be751d84474948460caeaa492e5ac57fc18 (image=quay.io/ceph/ceph:v19, name=happy_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 04:35:02 np0005591760 podman[94599]: 2026-01-22 09:35:02.475141878 +0000 UTC m=+0.018216290 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.kjnvpx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.kjnvpx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 04:35:02 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:02 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 04:35:02 np0005591760 happy_margulis[94611]: 
Jan 22 04:35:02 np0005591760 happy_margulis[94611]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 22 04:35:02 np0005591760 systemd[1]: libpod-f700accf7e27608deb1912b4bc693be751d84474948460caeaa492e5ac57fc18.scope: Deactivated successfully.
Jan 22 04:35:02 np0005591760 conmon[94611]: conmon f700accf7e27608deb19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f700accf7e27608deb1912b4bc693be751d84474948460caeaa492e5ac57fc18.scope/container/memory.events
Jan 22 04:35:02 np0005591760 podman[94599]: 2026-01-22 09:35:02.833450798 +0000 UTC m=+0.376525189 container died f700accf7e27608deb1912b4bc693be751d84474948460caeaa492e5ac57fc18 (image=quay.io/ceph/ceph:v19, name=happy_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 04:35:02 np0005591760 systemd[1]: var-lib-containers-storage-overlay-f52333883a9bc299b31d1fea4e54c9351cafab53747e417eb85fbb75d89ce435-merged.mount: Deactivated successfully.
Jan 22 04:35:02 np0005591760 podman[94599]: 2026-01-22 09:35:02.853711142 +0000 UTC m=+0.396785533 container remove f700accf7e27608deb1912b4bc693be751d84474948460caeaa492e5ac57fc18 (image=quay.io/ceph/ceph:v19, name=happy_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:35:02 np0005591760 systemd[1]: libpod-conmon-f700accf7e27608deb1912b4bc693be751d84474948460caeaa492e5ac57fc18.scope: Deactivated successfully.
Jan 22 04:35:02 np0005591760 ansible-async_wrapper.py[94597]: Module complete (94597)
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kfoyhi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kfoyhi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kfoyhi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:35:03 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.kfoyhi on compute-0
Jan 22 04:35:03 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.kfoyhi on compute-0
Jan 22 04:35:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v13: 12 pgs: 12 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Jan 22 04:35:03 np0005591760 python3[94741]: ansible-ansible.legacy.async_status Invoked with jid=j222079012296.94593 mode=status _async_dir=/root/.ansible_async
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: Deploying daemon rgw.rgw.compute-1.kjnvpx on compute-1
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kfoyhi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kfoyhi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 04:35:03 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:03 np0005591760 podman[94825]: 2026-01-22 09:35:03.800323404 +0000 UTC m=+0.032267145 container create cd85eafc13c7ef425384a9c0ea972988ebb83f4f9b8586c7403421b6180e2ad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_edison, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:35:03 np0005591760 systemd[1]: Started libpod-conmon-cd85eafc13c7ef425384a9c0ea972988ebb83f4f9b8586c7403421b6180e2ad6.scope.
Jan 22 04:35:03 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:03 np0005591760 podman[94825]: 2026-01-22 09:35:03.857373511 +0000 UTC m=+0.089317252 container init cd85eafc13c7ef425384a9c0ea972988ebb83f4f9b8586c7403421b6180e2ad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_edison, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 22 04:35:03 np0005591760 podman[94825]: 2026-01-22 09:35:03.862038751 +0000 UTC m=+0.093982492 container start cd85eafc13c7ef425384a9c0ea972988ebb83f4f9b8586c7403421b6180e2ad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_edison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 04:35:03 np0005591760 podman[94825]: 2026-01-22 09:35:03.864287702 +0000 UTC m=+0.096231443 container attach cd85eafc13c7ef425384a9c0ea972988ebb83f4f9b8586c7403421b6180e2ad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_edison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 22 04:35:03 np0005591760 sad_edison[94838]: 167 167
Jan 22 04:35:03 np0005591760 systemd[1]: libpod-cd85eafc13c7ef425384a9c0ea972988ebb83f4f9b8586c7403421b6180e2ad6.scope: Deactivated successfully.
Jan 22 04:35:03 np0005591760 podman[94825]: 2026-01-22 09:35:03.867367636 +0000 UTC m=+0.099311377 container died cd85eafc13c7ef425384a9c0ea972988ebb83f4f9b8586c7403421b6180e2ad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 04:35:03 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3dadf8cf38726b76913beeec6171afb1eba076dcc1a9c82d89f637056e57bd0c-merged.mount: Deactivated successfully.
Jan 22 04:35:03 np0005591760 podman[94825]: 2026-01-22 09:35:03.78727798 +0000 UTC m=+0.019221741 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:35:03 np0005591760 podman[94825]: 2026-01-22 09:35:03.886674747 +0000 UTC m=+0.118618488 container remove cd85eafc13c7ef425384a9c0ea972988ebb83f4f9b8586c7403421b6180e2ad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:35:03 np0005591760 python3[94821]: ansible-ansible.legacy.async_status Invoked with jid=j222079012296.94593 mode=cleanup _async_dir=/root/.ansible_async
Jan 22 04:35:03 np0005591760 systemd[1]: libpod-conmon-cd85eafc13c7ef425384a9c0ea972988ebb83f4f9b8586c7403421b6180e2ad6.scope: Deactivated successfully.
Jan 22 04:35:03 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:03 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:03 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:04 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:04 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:04 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:04 np0005591760 systemd[1]: Starting Ceph rgw.rgw.compute-0.kfoyhi for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:35:04 np0005591760 python3[94956]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:35:04 np0005591760 podman[94994]: 2026-01-22 09:35:04.4749258 +0000 UTC m=+0.031089958 container create e50e268acece07e9d456987e69a17e8fb45b78a70b7844e6c970da6ee5463241 (image=quay.io/ceph/ceph:v19, name=intelligent_wright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 04:35:04 np0005591760 podman[95002]: 2026-01-22 09:35:04.493646393 +0000 UTC m=+0.034059744 container create 63af978e8995663487731a09c6261d6511d98fba5f40e1b4439bc3983092b3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-rgw-rgw-compute-0-kfoyhi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:35:04 np0005591760 systemd[1]: Started libpod-conmon-e50e268acece07e9d456987e69a17e8fb45b78a70b7844e6c970da6ee5463241.scope.
Jan 22 04:35:04 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2d23db75f184400729d4332208e5cd092b88d5256b41b1cf1469f21d1f8e7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2d23db75f184400729d4332208e5cd092b88d5256b41b1cf1469f21d1f8e7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e0e080e3af4d5093bf1f2ef31cb9ebf1856b8377d4eede873ac48bc691c9f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e0e080e3af4d5093bf1f2ef31cb9ebf1856b8377d4eede873ac48bc691c9f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e0e080e3af4d5093bf1f2ef31cb9ebf1856b8377d4eede873ac48bc691c9f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e0e080e3af4d5093bf1f2ef31cb9ebf1856b8377d4eede873ac48bc691c9f9/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.kfoyhi supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:04 np0005591760 podman[94994]: 2026-01-22 09:35:04.527040126 +0000 UTC m=+0.083204285 container init e50e268acece07e9d456987e69a17e8fb45b78a70b7844e6c970da6ee5463241 (image=quay.io/ceph/ceph:v19, name=intelligent_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:35:04 np0005591760 podman[95002]: 2026-01-22 09:35:04.528499516 +0000 UTC m=+0.068912867 container init 63af978e8995663487731a09c6261d6511d98fba5f40e1b4439bc3983092b3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-rgw-rgw-compute-0-kfoyhi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:35:04 np0005591760 podman[95002]: 2026-01-22 09:35:04.532579558 +0000 UTC m=+0.072992910 container start 63af978e8995663487731a09c6261d6511d98fba5f40e1b4439bc3983092b3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-rgw-rgw-compute-0-kfoyhi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 04:35:04 np0005591760 bash[95002]: 63af978e8995663487731a09c6261d6511d98fba5f40e1b4439bc3983092b3d3
Jan 22 04:35:04 np0005591760 podman[95002]: 2026-01-22 09:35:04.475540522 +0000 UTC m=+0.015953893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:35:04 np0005591760 podman[94994]: 2026-01-22 09:35:04.534646086 +0000 UTC m=+0.090810244 container start e50e268acece07e9d456987e69a17e8fb45b78a70b7844e6c970da6ee5463241 (image=quay.io/ceph/ceph:v19, name=intelligent_wright, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:35:04 np0005591760 podman[94994]: 2026-01-22 09:35:04.536126876 +0000 UTC m=+0.092291033 container attach e50e268acece07e9d456987e69a17e8fb45b78a70b7844e6c970da6ee5463241 (image=quay.io/ceph/ceph:v19, name=intelligent_wright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:35:04 np0005591760 systemd[1]: Started Ceph rgw.rgw.compute-0.kfoyhi for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:35:04 np0005591760 podman[94994]: 2026-01-22 09:35:04.462673626 +0000 UTC m=+0.018837804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:35:04 np0005591760 radosgw[95028]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 22 04:35:04 np0005591760 radosgw[95028]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Jan 22 04:35:04 np0005591760 radosgw[95028]: framework: beast
Jan 22 04:35:04 np0005591760 radosgw[95028]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 22 04:35:04 np0005591760 radosgw[95028]: init_numa not setting numa affinity
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:04 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev c06e9944-a32a-4b1b-bb63-5093ecdd57cb (Updating rgw.rgw deployment (+2 -> 3))
Jan 22 04:35:04 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event c06e9944-a32a-4b1b-bb63-5093ecdd57cb (Updating rgw.rgw deployment (+2 -> 3)) in 2 seconds
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 22 04:35:04 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 04:35:04 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:04 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 50036b73-de79-405c-ae00-3f7b708be968 (Updating mds.cephfs deployment (+3 -> 3))
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zwrmjl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zwrmjl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zwrmjl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:35:04 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.zwrmjl on compute-2
Jan 22 04:35:04 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.zwrmjl on compute-2
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: Deploying daemon rgw.rgw.compute-0.kfoyhi on compute-0
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zwrmjl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 04:35:04 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zwrmjl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 04:35:04 np0005591760 radosgw[95028]: v1 topic migration: starting v1 topic migration..
Jan 22 04:35:04 np0005591760 radosgw[95028]: LDAP not started since no server URIs were provided in the configuration.
Jan 22 04:35:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-rgw-rgw-compute-0-kfoyhi[95022]: 2026-01-22T09:35:04.698+0000 7fe8b9be8980 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 22 04:35:04 np0005591760 radosgw[95028]: v1 topic migration: finished v1 topic migration
Jan 22 04:35:04 np0005591760 radosgw[95028]: framework: beast
Jan 22 04:35:04 np0005591760 radosgw[95028]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 22 04:35:04 np0005591760 radosgw[95028]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 22 04:35:04 np0005591760 radosgw[95028]: starting handler: beast
Jan 22 04:35:04 np0005591760 radosgw[95028]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 04:35:04 np0005591760 radosgw[95028]: mgrc service_daemon_register rgw.24340 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.kfoyhi,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865364,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=30fc5159-0993-4ff0-a95e-d1f2df875388,zone_name=default,zonegroup_id=466b069f-ae0a-4d3b-a92d-186e5cb7d7b9,zonegroup_name=default}
Jan 22 04:35:04 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14601 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 04:35:04 np0005591760 intelligent_wright[95018]: 
Jan 22 04:35:04 np0005591760 intelligent_wright[95018]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 22 04:35:04 np0005591760 systemd[1]: libpod-e50e268acece07e9d456987e69a17e8fb45b78a70b7844e6c970da6ee5463241.scope: Deactivated successfully.
Jan 22 04:35:04 np0005591760 conmon[95018]: conmon e50e268acece07e9d456 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e50e268acece07e9d456987e69a17e8fb45b78a70b7844e6c970da6ee5463241.scope/container/memory.events
Jan 22 04:35:04 np0005591760 podman[95671]: 2026-01-22 09:35:04.866815437 +0000 UTC m=+0.021822197 container died e50e268acece07e9d456987e69a17e8fb45b78a70b7844e6c970da6ee5463241 (image=quay.io/ceph/ceph:v19, name=intelligent_wright, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:35:04 np0005591760 systemd[1]: var-lib-containers-storage-overlay-8c2d23db75f184400729d4332208e5cd092b88d5256b41b1cf1469f21d1f8e7f-merged.mount: Deactivated successfully.
Jan 22 04:35:04 np0005591760 podman[95671]: 2026-01-22 09:35:04.888722033 +0000 UTC m=+0.043728773 container remove e50e268acece07e9d456987e69a17e8fb45b78a70b7844e6c970da6ee5463241 (image=quay.io/ceph/ceph:v19, name=intelligent_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 04:35:04 np0005591760 systemd[1]: libpod-conmon-e50e268acece07e9d456987e69a17e8fb45b78a70b7844e6c970da6ee5463241.scope: Deactivated successfully.
Jan 22 04:35:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v14: 12 pgs: 12 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Jan 22 04:35:05 np0005591760 python3[95707]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:35:05 np0005591760 podman[95708]: 2026-01-22 09:35:05.645723221 +0000 UTC m=+0.027238698 container create cd3a7d5135ff184f8b4015ab50fd0c4ea0c3eb310ed8722bbfcc87a6b2dc06aa (image=quay.io/ceph/ceph:v19, name=epic_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:35:05 np0005591760 systemd[1]: Started libpod-conmon-cd3a7d5135ff184f8b4015ab50fd0c4ea0c3eb310ed8722bbfcc87a6b2dc06aa.scope.
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: Deploying daemon mds.cephfs.compute-2.zwrmjl on compute-2
Jan 22 04:35:05 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a74c4d4381daab756345ae95f25c3310e4f657694935b79831a895dbf7d2ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a74c4d4381daab756345ae95f25c3310e4f657694935b79831a895dbf7d2ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:05 np0005591760 podman[95708]: 2026-01-22 09:35:05.701218739 +0000 UTC m=+0.082734236 container init cd3a7d5135ff184f8b4015ab50fd0c4ea0c3eb310ed8722bbfcc87a6b2dc06aa (image=quay.io/ceph/ceph:v19, name=epic_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 22 04:35:05 np0005591760 podman[95708]: 2026-01-22 09:35:05.706382823 +0000 UTC m=+0.087898289 container start cd3a7d5135ff184f8b4015ab50fd0c4ea0c3eb310ed8722bbfcc87a6b2dc06aa (image=quay.io/ceph/ceph:v19, name=epic_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 22 04:35:05 np0005591760 podman[95708]: 2026-01-22 09:35:05.707469587 +0000 UTC m=+0.088985063 container attach cd3a7d5135ff184f8b4015ab50fd0c4ea0c3eb310ed8722bbfcc87a6b2dc06aa (image=quay.io/ceph/ceph:v19, name=epic_kilby, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 22 04:35:05 np0005591760 podman[95708]: 2026-01-22 09:35:05.635190428 +0000 UTC m=+0.016705934 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xazhzz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xazhzz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xazhzz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:35:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:35:05 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.xazhzz on compute-0
Jan 22 04:35:05 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.xazhzz on compute-0
Jan 22 04:35:05 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.24335 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 04:35:05 np0005591760 epic_kilby[95720]: 
Jan 22 04:35:05 np0005591760 epic_kilby[95720]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 22 04:35:05 np0005591760 systemd[1]: libpod-cd3a7d5135ff184f8b4015ab50fd0c4ea0c3eb310ed8722bbfcc87a6b2dc06aa.scope: Deactivated successfully.
Jan 22 04:35:05 np0005591760 podman[95708]: 2026-01-22 09:35:05.99145343 +0000 UTC m=+0.372968907 container died cd3a7d5135ff184f8b4015ab50fd0c4ea0c3eb310ed8722bbfcc87a6b2dc06aa (image=quay.io/ceph/ceph:v19, name=epic_kilby, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:06 np0005591760 systemd[1]: var-lib-containers-storage-overlay-32a74c4d4381daab756345ae95f25c3310e4f657694935b79831a895dbf7d2ce-merged.mount: Deactivated successfully.
Jan 22 04:35:06 np0005591760 podman[95708]: 2026-01-22 09:35:06.011949779 +0000 UTC m=+0.393465257 container remove cd3a7d5135ff184f8b4015ab50fd0c4ea0c3eb310ed8722bbfcc87a6b2dc06aa (image=quay.io/ceph/ceph:v19, name=epic_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:35:06 np0005591760 systemd[1]: libpod-conmon-cd3a7d5135ff184f8b4015ab50fd0c4ea0c3eb310ed8722bbfcc87a6b2dc06aa.scope: Deactivated successfully.
Jan 22 04:35:06 np0005591760 podman[95840]: 2026-01-22 09:35:06.189161502 +0000 UTC m=+0.025533885 container create 33adc70c5dae8378dc8f98d13bb1780596b49b52385d1af8c58fd79e7e477408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:35:06 np0005591760 systemd[1]: Started libpod-conmon-33adc70c5dae8378dc8f98d13bb1780596b49b52385d1af8c58fd79e7e477408.scope.
Jan 22 04:35:06 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:06 np0005591760 podman[95840]: 2026-01-22 09:35:06.232733798 +0000 UTC m=+0.069106201 container init 33adc70c5dae8378dc8f98d13bb1780596b49b52385d1af8c58fd79e7e477408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 22 04:35:06 np0005591760 podman[95840]: 2026-01-22 09:35:06.237068935 +0000 UTC m=+0.073441318 container start 33adc70c5dae8378dc8f98d13bb1780596b49b52385d1af8c58fd79e7e477408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:35:06 np0005591760 epic_mcclintock[95853]: 167 167
Jan 22 04:35:06 np0005591760 systemd[1]: libpod-33adc70c5dae8378dc8f98d13bb1780596b49b52385d1af8c58fd79e7e477408.scope: Deactivated successfully.
Jan 22 04:35:06 np0005591760 podman[95840]: 2026-01-22 09:35:06.240279604 +0000 UTC m=+0.076651978 container attach 33adc70c5dae8378dc8f98d13bb1780596b49b52385d1af8c58fd79e7e477408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:06 np0005591760 conmon[95853]: conmon 33adc70c5dae8378dc8f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33adc70c5dae8378dc8f98d13bb1780596b49b52385d1af8c58fd79e7e477408.scope/container/memory.events
Jan 22 04:35:06 np0005591760 podman[95840]: 2026-01-22 09:35:06.240835556 +0000 UTC m=+0.077207949 container died 33adc70c5dae8378dc8f98d13bb1780596b49b52385d1af8c58fd79e7e477408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mcclintock, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 04:35:06 np0005591760 systemd[1]: var-lib-containers-storage-overlay-702cb2789736b424f1118725de9c3199fe16fbd2e96e502bb0f4c9050b64366d-merged.mount: Deactivated successfully.
Jan 22 04:35:06 np0005591760 podman[95840]: 2026-01-22 09:35:06.257900829 +0000 UTC m=+0.094273212 container remove 33adc70c5dae8378dc8f98d13bb1780596b49b52385d1af8c58fd79e7e477408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mcclintock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:35:06 np0005591760 podman[95840]: 2026-01-22 09:35:06.17896818 +0000 UTC m=+0.015340584 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:35:06 np0005591760 systemd[1]: libpod-conmon-33adc70c5dae8378dc8f98d13bb1780596b49b52385d1af8c58fd79e7e477408.scope: Deactivated successfully.
Jan 22 04:35:06 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:06 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:06 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:06 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 8 completed events
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:06 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:06 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:06 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e3 new map
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-01-22T09:35:06:688450+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T09:34:52.434283+0000#012modified#0112026-01-22T09:34:52.434283+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.zwrmjl{-1:24346} state up:standby seq 1 addr [v2:192.168.122.102:6804/3709284713,v1:192.168.122.102:6805/3709284713] compat {c=[1],r=[1],i=[1fff]}]
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3709284713,v1:192.168.122.102:6805/3709284713] up:boot
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/3709284713,v1:192.168.122.102:6805/3709284713] as mds.0
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.zwrmjl assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.zwrmjl"} v 0)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zwrmjl"}]: dispatch
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e3 all = 0
Jan 22 04:35:06 np0005591760 systemd[1]: Starting Ceph mds.cephfs.compute-0.xazhzz for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e4 new map
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-01-22T09:35:06:694486+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T09:34:52.434283+0000#012modified#0112026-01-22T09:35:06.694480+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24346}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.zwrmjl{0:24346} state up:creating seq 1 addr [v2:192.168.122.102:6804/3709284713,v1:192.168.122.102:6805/3709284713] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zwrmjl=up:creating}
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.zwrmjl is now active in filesystem cephfs as rank 0
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xazhzz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xazhzz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: Deploying daemon mds.cephfs.compute-0.xazhzz on compute-0
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: daemon mds.cephfs.compute-2.zwrmjl assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: Cluster is now healthy
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: daemon mds.cephfs.compute-2.zwrmjl is now active in filesystem cephfs as rank 0
Jan 22 04:35:06 np0005591760 podman[96012]: 2026-01-22 09:35:06.860384439 +0000 UTC m=+0.031920119 container create e7e32617baeda6719a90244a76a2cedb542c9956bafb89c22bd702653b990d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mds-cephfs-compute-0-xazhzz, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:35:06 np0005591760 python3[95993]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:35:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a97c851f780dd9cbca071043ec48c272ce2c754bc3fb19127ced8c2ab89c217/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a97c851f780dd9cbca071043ec48c272ce2c754bc3fb19127ced8c2ab89c217/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a97c851f780dd9cbca071043ec48c272ce2c754bc3fb19127ced8c2ab89c217/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a97c851f780dd9cbca071043ec48c272ce2c754bc3fb19127ced8c2ab89c217/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.xazhzz supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:06 np0005591760 podman[96012]: 2026-01-22 09:35:06.901348004 +0000 UTC m=+0.072883703 container init e7e32617baeda6719a90244a76a2cedb542c9956bafb89c22bd702653b990d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mds-cephfs-compute-0-xazhzz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 04:35:06 np0005591760 podman[96012]: 2026-01-22 09:35:06.90570457 +0000 UTC m=+0.077240260 container start e7e32617baeda6719a90244a76a2cedb542c9956bafb89c22bd702653b990d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mds-cephfs-compute-0-xazhzz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:06 np0005591760 bash[96012]: e7e32617baeda6719a90244a76a2cedb542c9956bafb89c22bd702653b990d53
Jan 22 04:35:06 np0005591760 podman[96012]: 2026-01-22 09:35:06.843655753 +0000 UTC m=+0.015191462 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:35:06 np0005591760 systemd[1]: Started Ceph mds.cephfs.compute-0.xazhzz for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:35:06 np0005591760 podman[96027]: 2026-01-22 09:35:06.926670709 +0000 UTC m=+0.030659756 container create 1ba3a33883c1cbd6c33edc372b1575ecaf8d907a8335c0df457a5be6f15799e9 (image=quay.io/ceph/ceph:v19, name=recursing_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 04:35:06 np0005591760 ceph-mds[96037]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 04:35:06 np0005591760 ceph-mds[96037]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Jan 22 04:35:06 np0005591760 ceph-mds[96037]: main not setting numa affinity
Jan 22 04:35:06 np0005591760 ceph-mds[96037]: pidfile_write: ignore empty --pid-file
Jan 22 04:35:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mds-cephfs-compute-0-xazhzz[96024]: starting mds.cephfs.compute-0.xazhzz at 
Jan 22 04:35:06 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Updating MDS map to version 4 from mon.0
Jan 22 04:35:06 np0005591760 systemd[1]: Started libpod-conmon-1ba3a33883c1cbd6c33edc372b1575ecaf8d907a8335c0df457a5be6f15799e9.scope.
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:06 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.sqikyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.sqikyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 04:35:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ae2c80e3baf83f7ce4abc643d8d786ed52337a65decb7c18329f1c25d48ae8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ae2c80e3baf83f7ce4abc643d8d786ed52337a65decb7c18329f1c25d48ae8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.sqikyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 04:35:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 04:35:06 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.sqikyq on compute-1
Jan 22 04:35:06 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.sqikyq on compute-1
Jan 22 04:35:06 np0005591760 podman[96027]: 2026-01-22 09:35:06.990077394 +0000 UTC m=+0.094066460 container init 1ba3a33883c1cbd6c33edc372b1575ecaf8d907a8335c0df457a5be6f15799e9 (image=quay.io/ceph/ceph:v19, name=recursing_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:35:06 np0005591760 podman[96027]: 2026-01-22 09:35:06.994699462 +0000 UTC m=+0.098688508 container start 1ba3a33883c1cbd6c33edc372b1575ecaf8d907a8335c0df457a5be6f15799e9 (image=quay.io/ceph/ceph:v19, name=recursing_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:35:06 np0005591760 podman[96027]: 2026-01-22 09:35:06.995828246 +0000 UTC m=+0.099817292 container attach 1ba3a33883c1cbd6c33edc372b1575ecaf8d907a8335c0df457a5be6f15799e9 (image=quay.io/ceph/ceph:v19, name=recursing_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:35:07 np0005591760 podman[96027]: 2026-01-22 09:35:06.913793934 +0000 UTC m=+0.017783000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:35:07 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.14622 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 04:35:07 np0005591760 recursing_bartik[96059]: 
Jan 22 04:35:07 np0005591760 recursing_bartik[96059]: [{"container_id": "3a09c1a59b9a", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.10%", "created": "2026-01-22T09:32:48.433517Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-22T09:34:52.708340Z", "memory_usage": 7795113, "ports": [], "service_name": "crash", "started": "2026-01-22T09:32:48.371757Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@crash.compute-0", "version": "19.2.3"}, {"container_id": "12642bdb715c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.48%", "created": "2026-01-22T09:33:18.912143Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-22T09:34:52.714354Z", "memory_usage": 7817134, "ports": [], "service_name": "crash", "started": "2026-01-22T09:33:18.683405Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@crash.compute-1", "version": "19.2.3"}, {"container_id": "90540f6c5eeb", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.28%", "created": "2026-01-22T09:34:05.573458Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-22T09:34:52.784093Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2026-01-22T09:34:05.509980Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@crash.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.compute-0.xazhzz", "daemon_name": "mds.cephfs.compute-0.xazhzz", "daemon_type": "mds", "events": ["2026-01-22T09:35:06.971766Z daemon:mds.cephfs.compute-0.xazhzz [INFO] \"Deployed mds.cephfs.compute-0.xazhzz on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-2.zwrmjl", "daemon_name": "mds.cephfs.compute-2.zwrmjl", "daemon_type": "mds", "events": ["2026-01-22T09:35:05.796163Z daemon:mds.cephfs.compute-2.zwrmjl [INFO] \"Deployed mds.cephfs.compute-2.zwrmjl on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "d582143798a4", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "27.06%", "created": 
"2026-01-22T09:32:19.193096Z", "daemon_id": "compute-0.rfmoog", "daemon_name": "mgr.compute-0.rfmoog", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-22T09:34:52.708272Z", "memory_usage": 540226355, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-22T09:32:19.128871Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@mgr.compute-0.rfmoog", "version": "19.2.3"}, {"container_id": "ffb126efa2dd", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "43.72%", "created": "2026-01-22T09:34:04.206811Z", "daemon_id": "compute-1.upcmhd", "daemon_name": "mgr.compute-1.upcmhd", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-22T09:34:52.714565Z", "memory_usage": 503735910, "ports": [8765], "service_name": "mgr", "started": "2026-01-22T09:34:04.149209Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@mgr.compute-1.upcmhd", "version": "19.2.3"}, {"container_id": "cc771d8f677d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "40.07%", "created": "2026-01-22T09:33:59.376399Z", "daemon_id": "compute-2.bisona", "daemon_name": "mgr.compute-2.bisona", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-22T09:34:52.784017Z", "memory_usage": 503421337, "ports": 
[8765], "service_name": "mgr", "started": "2026-01-22T09:33:59.304887Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@mgr.compute-2.bisona", "version": "19.2.3"}, {"container_id": "1d9f52463946", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "1.79%", "created": "2026-01-22T09:32:16.729072Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-22T09:34:52.708184Z", "memory_request": 2147483648, "memory_usage": 57178849, "ports": [], "service_name": "mon", "started": "2026-01-22T09:32:17.971513Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@mon.compute-0", "version": "19.2.3"}, {"container_id": "f69a31b8e610", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.28%", "created": "2026-01-22T09:33:58.074750Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-22T09:34:52.714499Z", "memory_request": 2147483648, "memory_usage": 43736104, "ports": [], "service_name": "mon", "started": "2026-01-22T09:33:58.017040Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@mon.compute-1", "version": "19.2.3"}, 
{"container_id": "f08ed0453a48", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "qu
Jan 22 04:35:07 np0005591760 systemd[1]: libpod-1ba3a33883c1cbd6c33edc372b1575ecaf8d907a8335c0df457a5be6f15799e9.scope: Deactivated successfully.
Jan 22 04:35:07 np0005591760 podman[96027]: 2026-01-22 09:35:07.272848775 +0000 UTC m=+0.376837821 container died 1ba3a33883c1cbd6c33edc372b1575ecaf8d907a8335c0df457a5be6f15799e9 (image=quay.io/ceph/ceph:v19, name=recursing_bartik, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:35:07 np0005591760 systemd[1]: var-lib-containers-storage-overlay-21ae2c80e3baf83f7ce4abc643d8d786ed52337a65decb7c18329f1c25d48ae8-merged.mount: Deactivated successfully.
Jan 22 04:35:07 np0005591760 podman[96027]: 2026-01-22 09:35:07.295185845 +0000 UTC m=+0.399174890 container remove 1ba3a33883c1cbd6c33edc372b1575ecaf8d907a8335c0df457a5be6f15799e9 (image=quay.io/ceph/ceph:v19, name=recursing_bartik, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:35:07 np0005591760 systemd[1]: libpod-conmon-1ba3a33883c1cbd6c33edc372b1575ecaf8d907a8335c0df457a5be6f15799e9.scope: Deactivated successfully.
Jan 22 04:35:07 np0005591760 ansible-async_wrapper.py[94596]: Done in kid B.
Jan 22 04:35:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v15: 12 pgs: 12 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e5 new map
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2026-01-22T09:35:07:697007+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T09:34:52.434283+0000#012modified#0112026-01-22T09:35:07.697005+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24346}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24346 members: 24346#012[mds.cephfs.compute-2.zwrmjl{0:24346} state up:active seq 2 addr [v2:192.168.122.102:6804/3709284713,v1:192.168.122.102:6805/3709284713] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.xazhzz{-1:14616} state up:standby seq 1 addr [v2:192.168.122.100:6806/4190227067,v1:192.168.122.100:6807/4190227067] compat {c=[1],r=[1],i=[1fff]}]
Jan 22 04:35:07 np0005591760 rsyslogd[962]: message too long (14810) with configured size 8096, begin of message is: [{"container_id": "3a09c1a59b9a", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 22 04:35:07 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Updating MDS map to version 5 from mon.0
Jan 22 04:35:07 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Monitors have assigned me to become a standby
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3709284713,v1:192.168.122.102:6805/3709284713] up:active
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/4190227067,v1:192.168.122.100:6807/4190227067] up:boot
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zwrmjl=up:active} 1 up:standby
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.xazhzz"} v 0)
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.xazhzz"}]: dispatch
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e5 all = 0
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e6 new map
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e6 print_map#012e6#012btime 2026-01-22T09:35:07:704688+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T09:34:52.434283+0000#012modified#0112026-01-22T09:35:07.697005+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24346}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24346 members: 24346#012[mds.cephfs.compute-2.zwrmjl{0:24346} state up:active seq 2 addr [v2:192.168.122.102:6804/3709284713,v1:192.168.122.102:6805/3709284713] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.xazhzz{-1:14616} state up:standby seq 1 addr [v2:192.168.122.100:6806/4190227067,v1:192.168.122.100:6807/4190227067] compat {c=[1],r=[1],i=[1fff]}]
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zwrmjl=up:active} 1 up:standby
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.sqikyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.sqikyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 04:35:07 np0005591760 ceph-mon[74254]: Deploying daemon mds.cephfs.compute-1.sqikyq on compute-1
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:08 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 50036b73-de79-405c-ae00-3f7b708be968 (Updating mds.cephfs deployment (+3 -> 3))
Jan 22 04:35:08 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 50036b73-de79-405c-ae00-3f7b708be968 (Updating mds.cephfs deployment (+3 -> 3)) in 3 seconds
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:35:08 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 83c7d432-f32c-42b5-9c08-a94547a0101e (Updating alertmanager deployment (+1 -> 1))
Jan 22 04:35:08 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Jan 22 04:35:08 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Jan 22 04:35:08 np0005591760 python3[96120]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:35:08 np0005591760 podman[96171]: 2026-01-22 09:35:08.243479685 +0000 UTC m=+0.029211418 container create 9212fd3918a79806ee7754a45947e133b6d407f9609e5fba47629a55d7d03008 (image=quay.io/ceph/ceph:v19, name=loving_taussig, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 04:35:08 np0005591760 systemd[1]: Started libpod-conmon-9212fd3918a79806ee7754a45947e133b6d407f9609e5fba47629a55d7d03008.scope.
Jan 22 04:35:08 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43032fdf8e1398b34f89d191e132ce77245154a8b9ea534fe1b15ad3cb6c4d38/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43032fdf8e1398b34f89d191e132ce77245154a8b9ea534fe1b15ad3cb6c4d38/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:08 np0005591760 podman[96171]: 2026-01-22 09:35:08.298742484 +0000 UTC m=+0.084474236 container init 9212fd3918a79806ee7754a45947e133b6d407f9609e5fba47629a55d7d03008 (image=quay.io/ceph/ceph:v19, name=loving_taussig, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Jan 22 04:35:08 np0005591760 podman[96171]: 2026-01-22 09:35:08.302480973 +0000 UTC m=+0.088212704 container start 9212fd3918a79806ee7754a45947e133b6d407f9609e5fba47629a55d7d03008 (image=quay.io/ceph/ceph:v19, name=loving_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:35:08 np0005591760 podman[96171]: 2026-01-22 09:35:08.303590831 +0000 UTC m=+0.089322563 container attach 9212fd3918a79806ee7754a45947e133b6d407f9609e5fba47629a55d7d03008 (image=quay.io/ceph/ceph:v19, name=loving_taussig, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:35:08 np0005591760 podman[96171]: 2026-01-22 09:35:08.231433581 +0000 UTC m=+0.017165323 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019054252' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 04:35:08 np0005591760 loving_taussig[96184]: 
Jan 22 04:35:08 np0005591760 loving_taussig[96184]: {"fsid":"43df7a30-cf5f-5209-adfd-bf44298b19f2","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":65,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1769074461,"num_in_osds":3,"osd_in_since":1769074446,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":12}],"num_pgs":12,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":84467712,"bytes_avail":64327458816,"bytes_total":64411926528,"read_bytes_sec":16735,"write_bytes_sec":0,"read_op_per_sec":5,"write_op_per_sec":1},"fsmap":{"epoch":6,"btime":"2026-01-22T09:35:07:704688+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.zwrmjl","status":"up:active","gid":24346}],"up:standby":1},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2026-01-22T09:35:05.407453+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.rfmoog":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.upcmhd":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.bisona":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 
0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14586":{"start_epoch":4,"start_stamp":"2026-01-22T09:35:03.511091+0000","gid":14586,"addr":"192.168.122.101:0/306523628","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC 7763 64-Core Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.kjnvpx","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7865364","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"30fc5159-0993-4ff0-a95e-d1f2df875388","zone_name":"default","zonegroup_id":"466b069f-ae0a-4d3b-a92d-186e5cb7d7b9","zonegroup_name":"default"},"task_status":{}},"24184":{"start_epoch":3,"start_stamp":"2026-01-22T09:34:52.427888+0000","gid":24184,"addr":"192.168.122.102:0/3366931585","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid 
(stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC 7763 64-Core Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.aqqfbf","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7865364","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"30fc5159-0993-4ff0-a95e-d1f2df875388","zone_name":"default","zonegroup_id":"466b069f-ae0a-4d3b-a92d-186e5cb7d7b9","zonegroup_name":"default"},"task_status":{}},"24340":{"start_epoch":4,"start_stamp":"2026-01-22T09:35:04.764929+0000","gid":24340,"addr":"192.168.122.100:0/3169725685","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC 7763 64-Core Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.kfoyhi","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 
2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7865364","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"30fc5159-0993-4ff0-a95e-d1f2df875388","zone_name":"default","zonegroup_id":"466b069f-ae0a-4d3b-a92d-186e5cb7d7b9","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"50036b73-de79-405c-ae00-3f7b708be968":{"message":"Updating mds.cephfs deployment (+3 -> 3) (2s)\n      [==================..........] (remaining: 1s)","progress":0.66666668653488159,"add_to_ceph_s":true}}}
Jan 22 04:35:08 np0005591760 systemd[1]: libpod-9212fd3918a79806ee7754a45947e133b6d407f9609e5fba47629a55d7d03008.scope: Deactivated successfully.
Jan 22 04:35:08 np0005591760 podman[96171]: 2026-01-22 09:35:08.640194896 +0000 UTC m=+0.425926648 container died 9212fd3918a79806ee7754a45947e133b6d407f9609e5fba47629a55d7d03008 (image=quay.io/ceph/ceph:v19, name=loving_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 04:35:08 np0005591760 systemd[1]: var-lib-containers-storage-overlay-43032fdf8e1398b34f89d191e132ce77245154a8b9ea534fe1b15ad3cb6c4d38-merged.mount: Deactivated successfully.
Jan 22 04:35:08 np0005591760 podman[96171]: 2026-01-22 09:35:08.660192122 +0000 UTC m=+0.445923854 container remove 9212fd3918a79806ee7754a45947e133b6d407f9609e5fba47629a55d7d03008 (image=quay.io/ceph/ceph:v19, name=loving_taussig, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 22 04:35:08 np0005591760 systemd[1]: libpod-conmon-9212fd3918a79806ee7754a45947e133b6d407f9609e5fba47629a55d7d03008.scope: Deactivated successfully.
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e7 new map
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e7 print_map#012e7#012btime 2026-01-22T09:35:08:985015+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T09:34:52.434283+0000#012modified#0112026-01-22T09:35:07.697005+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24346}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24346 members: 24346#012[mds.cephfs.compute-2.zwrmjl{0:24346} state up:active seq 2 addr [v2:192.168.122.102:6804/3709284713,v1:192.168.122.102:6805/3709284713] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.xazhzz{-1:14616} state up:standby seq 1 addr [v2:192.168.122.100:6806/4190227067,v1:192.168.122.100:6807/4190227067] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.sqikyq{-1:24347} state up:standby 
seq 1 addr [v2:192.168.122.101:6804/2295742283,v1:192.168.122.101:6805/2295742283] compat {c=[1],r=[1],i=[1fff]}]
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2295742283,v1:192.168.122.101:6805/2295742283] up:boot
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zwrmjl=up:active} 2 up:standby
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.sqikyq"} v 0)
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.sqikyq"}]: dispatch
Jan 22 04:35:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e7 all = 0
Jan 22 04:35:09 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:09 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:09 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:09 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:09 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:09 np0005591760 ceph-mon[74254]: Deploying daemon alertmanager.compute-0 on compute-0
Jan 22 04:35:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v16: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 1.2 KiB/s wr, 173 op/s
Jan 22 04:35:09 np0005591760 python3[96339]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:35:10 np0005591760 podman[96353]: 2026-01-22 09:35:10.013242136 +0000 UTC m=+0.276812235 container create 669d0da39b6386c8f9ae733e04102c6b4b11f6e4151ea18b42eb48f0a0b2e4be (image=quay.io/ceph/ceph:v19, name=sharp_faraday, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 04:35:10 np0005591760 podman[96239]: 2026-01-22 09:35:10.020751272 +0000 UTC m=+1.582199980 volume create 95f9b68f30b297f9dab9352b7d84f11696369b7d93f86f8d5245ed1bdfde9d43
Jan 22 04:35:10 np0005591760 podman[96239]: 2026-01-22 09:35:10.024829342 +0000 UTC m=+1.586278051 container create 33a60a6eb6d4102e8c9c494e3da5256b3423bb04480202f01edd008930a57b95 (image=quay.io/prometheus/alertmanager:v0.25.0, name=cool_kirch, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 systemd[1]: Started libpod-conmon-669d0da39b6386c8f9ae733e04102c6b4b11f6e4151ea18b42eb48f0a0b2e4be.scope.
Jan 22 04:35:10 np0005591760 podman[96353]: 2026-01-22 09:35:09.998035266 +0000 UTC m=+0.261605385 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:35:10 np0005591760 systemd[1]: Started libpod-conmon-33a60a6eb6d4102e8c9c494e3da5256b3423bb04480202f01edd008930a57b95.scope.
Jan 22 04:35:10 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c84e23f7ce1da5ec77ab374d213329d3039dce68d1c8cf71bfab427ca03f1f8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c84e23f7ce1da5ec77ab374d213329d3039dce68d1c8cf71bfab427ca03f1f8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:10 np0005591760 podman[96239]: 2026-01-22 09:35:10.009173214 +0000 UTC m=+1.570621942 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 22 04:35:10 np0005591760 podman[96353]: 2026-01-22 09:35:10.060720758 +0000 UTC m=+0.324290857 container init 669d0da39b6386c8f9ae733e04102c6b4b11f6e4151ea18b42eb48f0a0b2e4be (image=quay.io/ceph/ceph:v19, name=sharp_faraday, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:35:10 np0005591760 podman[96353]: 2026-01-22 09:35:10.064950755 +0000 UTC m=+0.328520855 container start 669d0da39b6386c8f9ae733e04102c6b4b11f6e4151ea18b42eb48f0a0b2e4be (image=quay.io/ceph/ceph:v19, name=sharp_faraday, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:35:10 np0005591760 podman[96353]: 2026-01-22 09:35:10.066235765 +0000 UTC m=+0.329805863 container attach 669d0da39b6386c8f9ae733e04102c6b4b11f6e4151ea18b42eb48f0a0b2e4be (image=quay.io/ceph/ceph:v19, name=sharp_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:35:10 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e9d9d561098d171e622436e0be179ce5bba4793ee31b49f43fefc1335ab6478/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:10 np0005591760 podman[96239]: 2026-01-22 09:35:10.078210624 +0000 UTC m=+1.639659332 container init 33a60a6eb6d4102e8c9c494e3da5256b3423bb04480202f01edd008930a57b95 (image=quay.io/prometheus/alertmanager:v0.25.0, name=cool_kirch, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 podman[96239]: 2026-01-22 09:35:10.083147097 +0000 UTC m=+1.644595804 container start 33a60a6eb6d4102e8c9c494e3da5256b3423bb04480202f01edd008930a57b95 (image=quay.io/prometheus/alertmanager:v0.25.0, name=cool_kirch, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 cool_kirch[96405]: 65534 65534
Jan 22 04:35:10 np0005591760 systemd[1]: libpod-33a60a6eb6d4102e8c9c494e3da5256b3423bb04480202f01edd008930a57b95.scope: Deactivated successfully.
Jan 22 04:35:10 np0005591760 podman[96239]: 2026-01-22 09:35:10.084711885 +0000 UTC m=+1.646160593 container attach 33a60a6eb6d4102e8c9c494e3da5256b3423bb04480202f01edd008930a57b95 (image=quay.io/prometheus/alertmanager:v0.25.0, name=cool_kirch, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 podman[96239]: 2026-01-22 09:35:10.085029145 +0000 UTC m=+1.646477863 container died 33a60a6eb6d4102e8c9c494e3da5256b3423bb04480202f01edd008930a57b95 (image=quay.io/prometheus/alertmanager:v0.25.0, name=cool_kirch, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4e9d9d561098d171e622436e0be179ce5bba4793ee31b49f43fefc1335ab6478-merged.mount: Deactivated successfully.
Jan 22 04:35:10 np0005591760 podman[96239]: 2026-01-22 09:35:10.101421887 +0000 UTC m=+1.662870594 container remove 33a60a6eb6d4102e8c9c494e3da5256b3423bb04480202f01edd008930a57b95 (image=quay.io/prometheus/alertmanager:v0.25.0, name=cool_kirch, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 podman[96239]: 2026-01-22 09:35:10.103007905 +0000 UTC m=+1.664456613 volume remove 95f9b68f30b297f9dab9352b7d84f11696369b7d93f86f8d5245ed1bdfde9d43
Jan 22 04:35:10 np0005591760 systemd[1]: libpod-conmon-33a60a6eb6d4102e8c9c494e3da5256b3423bb04480202f01edd008930a57b95.scope: Deactivated successfully.
Jan 22 04:35:10 np0005591760 podman[96419]: 2026-01-22 09:35:10.142741325 +0000 UTC m=+0.025123840 volume create 56b284583e61b726b3e6d2e1de1841812306400498693b337eb0d97723b0625e
Jan 22 04:35:10 np0005591760 podman[96419]: 2026-01-22 09:35:10.146846626 +0000 UTC m=+0.029229142 container create 6e104b0c9d12572d9238e62bc09d602594e45fb0200f558df4d699ce2e497945 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_lovelace, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 systemd[1]: Started libpod-conmon-6e104b0c9d12572d9238e62bc09d602594e45fb0200f558df4d699ce2e497945.scope.
Jan 22 04:35:10 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23303beff8fb62adb99e42a9daa00532db60d6dc55c9787c7415f8d3ede96017/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:10 np0005591760 podman[96419]: 2026-01-22 09:35:10.203109044 +0000 UTC m=+0.085491560 container init 6e104b0c9d12572d9238e62bc09d602594e45fb0200f558df4d699ce2e497945 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_lovelace, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 podman[96419]: 2026-01-22 09:35:10.207532978 +0000 UTC m=+0.089915494 container start 6e104b0c9d12572d9238e62bc09d602594e45fb0200f558df4d699ce2e497945 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_lovelace, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 angry_lovelace[96453]: 65534 65534
Jan 22 04:35:10 np0005591760 podman[96419]: 2026-01-22 09:35:10.208688903 +0000 UTC m=+0.091071419 container attach 6e104b0c9d12572d9238e62bc09d602594e45fb0200f558df4d699ce2e497945 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_lovelace, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 systemd[1]: libpod-6e104b0c9d12572d9238e62bc09d602594e45fb0200f558df4d699ce2e497945.scope: Deactivated successfully.
Jan 22 04:35:10 np0005591760 conmon[96453]: conmon 6e104b0c9d12572d9238 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e104b0c9d12572d9238e62bc09d602594e45fb0200f558df4d699ce2e497945.scope/container/memory.events
Jan 22 04:35:10 np0005591760 podman[96419]: 2026-01-22 09:35:10.218947327 +0000 UTC m=+0.101329844 container died 6e104b0c9d12572d9238e62bc09d602594e45fb0200f558df4d699ce2e497945 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_lovelace, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 podman[96419]: 2026-01-22 09:35:10.1338375 +0000 UTC m=+0.016220036 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 22 04:35:10 np0005591760 systemd[1]: var-lib-containers-storage-overlay-23303beff8fb62adb99e42a9daa00532db60d6dc55c9787c7415f8d3ede96017-merged.mount: Deactivated successfully.
Jan 22 04:35:10 np0005591760 podman[96419]: 2026-01-22 09:35:10.237739045 +0000 UTC m=+0.120121561 container remove 6e104b0c9d12572d9238e62bc09d602594e45fb0200f558df4d699ce2e497945 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_lovelace, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 podman[96419]: 2026-01-22 09:35:10.239057918 +0000 UTC m=+0.121440444 volume remove 56b284583e61b726b3e6d2e1de1841812306400498693b337eb0d97723b0625e
Jan 22 04:35:10 np0005591760 systemd[1]: libpod-conmon-6e104b0c9d12572d9238e62bc09d602594e45fb0200f558df4d699ce2e497945.scope: Deactivated successfully.
Jan 22 04:35:10 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:10 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:10 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 22 04:35:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1235439910' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 04:35:10 np0005591760 sharp_faraday[96401]: 
Jan 22 04:35:10 np0005591760 sharp_faraday[96401]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.rfmoog/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.upcmhd/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.bisona/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5503675187","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.kfoyhi","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.kjnvpx","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.aqqfbf","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 22 04:35:10 np0005591760 podman[96353]: 2026-01-22 09:35:10.367329455 +0000 UTC m=+0.630899555 container died 669d0da39b6386c8f9ae733e04102c6b4b11f6e4151ea18b42eb48f0a0b2e4be (image=quay.io/ceph/ceph:v19, name=sharp_faraday, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 04:35:10 np0005591760 systemd[1]: libpod-669d0da39b6386c8f9ae733e04102c6b4b11f6e4151ea18b42eb48f0a0b2e4be.scope: Deactivated successfully.
Jan 22 04:35:10 np0005591760 podman[96353]: 2026-01-22 09:35:10.481643655 +0000 UTC m=+0.745213754 container remove 669d0da39b6386c8f9ae733e04102c6b4b11f6e4151ea18b42eb48f0a0b2e4be (image=quay.io/ceph/ceph:v19, name=sharp_faraday, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 04:35:10 np0005591760 systemd[1]: libpod-conmon-669d0da39b6386c8f9ae733e04102c6b4b11f6e4151ea18b42eb48f0a0b2e4be.scope: Deactivated successfully.
Jan 22 04:35:10 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:10 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:10 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:10 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6c84e23f7ce1da5ec77ab374d213329d3039dce68d1c8cf71bfab427ca03f1f8-merged.mount: Deactivated successfully.
Jan 22 04:35:10 np0005591760 systemd[1]: Starting Ceph alertmanager.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:35:10 np0005591760 podman[96595]: 2026-01-22 09:35:10.857805648 +0000 UTC m=+0.026535458 volume create 120bc6bac3c0203fa6a9463fa695180f3a238cdf155b832382897c374633ceca
Jan 22 04:35:10 np0005591760 podman[96595]: 2026-01-22 09:35:10.862984097 +0000 UTC m=+0.031713909 container create 065a60b194a9778f2f8646ef2f22164f095837f4040ec6eece4458eea7fc026d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c3e7be19abe1bac5dd2f4deb46b4ef746f96aa4662a2aa6c2367a1f168e3a1/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c3e7be19abe1bac5dd2f4deb46b4ef746f96aa4662a2aa6c2367a1f168e3a1/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:10 np0005591760 podman[96595]: 2026-01-22 09:35:10.897764553 +0000 UTC m=+0.066494363 container init 065a60b194a9778f2f8646ef2f22164f095837f4040ec6eece4458eea7fc026d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 podman[96595]: 2026-01-22 09:35:10.901844235 +0000 UTC m=+0.070574046 container start 065a60b194a9778f2f8646ef2f22164f095837f4040ec6eece4458eea7fc026d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:10 np0005591760 bash[96595]: 065a60b194a9778f2f8646ef2f22164f095837f4040ec6eece4458eea7fc026d
Jan 22 04:35:10 np0005591760 podman[96595]: 2026-01-22 09:35:10.848885232 +0000 UTC m=+0.017615063 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 22 04:35:10 np0005591760 systemd[1]: Started Ceph alertmanager.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:35:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[96607]: ts=2026-01-22T09:35:10.921Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Jan 22 04:35:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[96607]: ts=2026-01-22T09:35:10.921Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Jan 22 04:35:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[96607]: ts=2026-01-22T09:35:10.927Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.26.184 port=9094
Jan 22 04:35:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[96607]: ts=2026-01-22T09:35:10.929Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Jan 22 04:35:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:35:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:35:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 22 04:35:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:10 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 83c7d432-f32c-42b5-9c08-a94547a0101e (Updating alertmanager deployment (+1 -> 1))
Jan 22 04:35:10 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 83c7d432-f32c-42b5-9c08-a94547a0101e (Updating alertmanager deployment (+1 -> 1)) in 3 seconds
Jan 22 04:35:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 22 04:35:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:10 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 864ae1ef-ef90-4a60-945d-830a512c65d0 (Updating grafana deployment (+1 -> 1))
Jan 22 04:35:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[96607]: ts=2026-01-22T09:35:10.962Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 22 04:35:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[96607]: ts=2026-01-22T09:35:10.963Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 22 04:35:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[96607]: ts=2026-01-22T09:35:10.966Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Jan 22 04:35:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[96607]: ts=2026-01-22T09:35:10.966Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Jan 22 04:35:10 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Jan 22 04:35:10 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e8 new map
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e8 print_map#012e8#012btime 2026-01-22T09:35:11:082871+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T09:34:52.434283+0000#012modified#0112026-01-22T09:35:10.708017+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24346}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24346 members: 24346#012[mds.cephfs.compute-2.zwrmjl{0:24346} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3709284713,v1:192.168.122.102:6805/3709284713] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.xazhzz{-1:14616} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/4190227067,v1:192.168.122.100:6807/4190227067] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.sqikyq{-1:24347} state up:standby seq 1 addr [v2:192.168.122.101:6804/2295742283,v1:192.168.122.101:6805/2295742283] compat {c=[1],r=[1],i=[1fff]}]
Jan 22 04:35:11 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Updating MDS map to version 8 from mon.0
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3709284713,v1:192.168.122.102:6805/3709284713] up:active
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/4190227067,v1:192.168.122.100:6807/4190227067] up:standby
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zwrmjl=up:active} 2 up:standby
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 22 04:35:11 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:11 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Jan 22 04:35:11 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Jan 22 04:35:11 np0005591760 python3[96699]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:35:11 np0005591760 podman[96702]: 2026-01-22 09:35:11.401952929 +0000 UTC m=+0.030540450 container create 48778d133f1e5eb98003f60c82792abacb2dfae6fcf8a946bb451f5926231a4b (image=quay.io/ceph/ceph:v19, name=elastic_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:35:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v17: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 KiB/s wr, 167 op/s
Jan 22 04:35:11 np0005591760 systemd[1]: Started libpod-conmon-48778d133f1e5eb98003f60c82792abacb2dfae6fcf8a946bb451f5926231a4b.scope.
Jan 22 04:35:11 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:11 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c77d7e0dab341ff11b16e7fed6b955561aa97a7655b7b66a49360cdc9b6cba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:11 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c77d7e0dab341ff11b16e7fed6b955561aa97a7655b7b66a49360cdc9b6cba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:11 np0005591760 podman[96702]: 2026-01-22 09:35:11.458128213 +0000 UTC m=+0.086715753 container init 48778d133f1e5eb98003f60c82792abacb2dfae6fcf8a946bb451f5926231a4b (image=quay.io/ceph/ceph:v19, name=elastic_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 22 04:35:11 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 10 completed events
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:35:11 np0005591760 podman[96702]: 2026-01-22 09:35:11.464462258 +0000 UTC m=+0.093049779 container start 48778d133f1e5eb98003f60c82792abacb2dfae6fcf8a946bb451f5926231a4b (image=quay.io/ceph/ceph:v19, name=elastic_lehmann, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:11 np0005591760 podman[96702]: 2026-01-22 09:35:11.468034111 +0000 UTC m=+0.096621652 container attach 48778d133f1e5eb98003f60c82792abacb2dfae6fcf8a946bb451f5926231a4b (image=quay.io/ceph/ceph:v19, name=elastic_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:11 np0005591760 podman[96702]: 2026-01-22 09:35:11.390513411 +0000 UTC m=+0.019100922 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 22 04:35:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/379949156' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 22 04:35:11 np0005591760 elastic_lehmann[96723]: mimic
Jan 22 04:35:11 np0005591760 systemd[1]: libpod-48778d133f1e5eb98003f60c82792abacb2dfae6fcf8a946bb451f5926231a4b.scope: Deactivated successfully.
Jan 22 04:35:11 np0005591760 conmon[96723]: conmon 48778d133f1e5eb98003 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48778d133f1e5eb98003f60c82792abacb2dfae6fcf8a946bb451f5926231a4b.scope/container/memory.events
Jan 22 04:35:11 np0005591760 podman[96702]: 2026-01-22 09:35:11.749243588 +0000 UTC m=+0.377831109 container died 48778d133f1e5eb98003f60c82792abacb2dfae6fcf8a946bb451f5926231a4b (image=quay.io/ceph/ceph:v19, name=elastic_lehmann, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:35:11 np0005591760 systemd[1]: var-lib-containers-storage-overlay-68c77d7e0dab341ff11b16e7fed6b955561aa97a7655b7b66a49360cdc9b6cba-merged.mount: Deactivated successfully.
Jan 22 04:35:11 np0005591760 podman[96702]: 2026-01-22 09:35:11.766451111 +0000 UTC m=+0.395038631 container remove 48778d133f1e5eb98003f60c82792abacb2dfae6fcf8a946bb451f5926231a4b (image=quay.io/ceph/ceph:v19, name=elastic_lehmann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 04:35:11 np0005591760 systemd[1]: libpod-conmon-48778d133f1e5eb98003f60c82792abacb2dfae6fcf8a946bb451f5926231a4b.scope: Deactivated successfully.
Jan 22 04:35:12 np0005591760 ceph-mon[74254]: Regenerating cephadm self-signed grafana TLS certificates
Jan 22 04:35:12 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:12 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:12 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 22 04:35:12 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:12 np0005591760 ceph-mon[74254]: Deploying daemon grafana.compute-0 on compute-0
Jan 22 04:35:12 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e9 new map
Jan 22 04:35:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).mds e9 print_map#012e9#012btime 2026-01-22T09:35:12:468222+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T09:34:52.434283+0000#012modified#0112026-01-22T09:35:10.708017+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24346}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24346 members: 24346#012[mds.cephfs.compute-2.zwrmjl{0:24346} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3709284713,v1:192.168.122.102:6805/3709284713] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.xazhzz{-1:14616} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/4190227067,v1:192.168.122.100:6807/4190227067] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.sqikyq{-1:24347} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2295742283,v1:192.168.122.101:6805/2295742283] compat {c=[1],r=[1],i=[1fff]}]
Jan 22 04:35:12 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2295742283,v1:192.168.122.101:6805/2295742283] up:standby
Jan 22 04:35:12 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zwrmjl=up:active} 2 up:standby
Jan 22 04:35:12 np0005591760 python3[96865]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:35:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[96607]: ts=2026-01-22T09:35:12.929Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000714302s
Jan 22 04:35:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:35:13 np0005591760 podman[96884]: 2026-01-22 09:35:13.286940531 +0000 UTC m=+0.521748667 container create 63f29dfd1e5ab2f369513a678b97419477b8c0786711df9580ee7ee48c789ce7 (image=quay.io/ceph/ceph:v19, name=nervous_banach, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:13 np0005591760 podman[96884]: 2026-01-22 09:35:13.270714404 +0000 UTC m=+0.505522560 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:35:13 np0005591760 systemd[1]: Started libpod-conmon-63f29dfd1e5ab2f369513a678b97419477b8c0786711df9580ee7ee48c789ce7.scope.
Jan 22 04:35:13 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05df71e6782a6da57bc69e30e8c87ae7d2238646df5e340059ed18f2347e3dba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05df71e6782a6da57bc69e30e8c87ae7d2238646df5e340059ed18f2347e3dba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:13 np0005591760 podman[96884]: 2026-01-22 09:35:13.361707763 +0000 UTC m=+0.596515919 container init 63f29dfd1e5ab2f369513a678b97419477b8c0786711df9580ee7ee48c789ce7 (image=quay.io/ceph/ceph:v19, name=nervous_banach, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 04:35:13 np0005591760 podman[96884]: 2026-01-22 09:35:13.383835127 +0000 UTC m=+0.618643263 container start 63f29dfd1e5ab2f369513a678b97419477b8c0786711df9580ee7ee48c789ce7 (image=quay.io/ceph/ceph:v19, name=nervous_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 04:35:13 np0005591760 podman[96884]: 2026-01-22 09:35:13.401353227 +0000 UTC m=+0.636161363 container attach 63f29dfd1e5ab2f369513a678b97419477b8c0786711df9580ee7ee48c789ce7 (image=quay.io/ceph/ceph:v19, name=nervous_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 04:35:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v18: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 KiB/s wr, 167 op/s
Jan 22 04:35:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 22 04:35:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3721182781' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 22 04:35:13 np0005591760 nervous_banach[96948]: 
Jan 22 04:35:13 np0005591760 nervous_banach[96948]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Jan 22 04:35:13 np0005591760 systemd[1]: libpod-63f29dfd1e5ab2f369513a678b97419477b8c0786711df9580ee7ee48c789ce7.scope: Deactivated successfully.
Jan 22 04:35:13 np0005591760 podman[96884]: 2026-01-22 09:35:13.770418402 +0000 UTC m=+1.005226537 container died 63f29dfd1e5ab2f369513a678b97419477b8c0786711df9580ee7ee48c789ce7 (image=quay.io/ceph/ceph:v19, name=nervous_banach, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:35:13 np0005591760 systemd[1]: var-lib-containers-storage-overlay-05df71e6782a6da57bc69e30e8c87ae7d2238646df5e340059ed18f2347e3dba-merged.mount: Deactivated successfully.
Jan 22 04:35:13 np0005591760 podman[96884]: 2026-01-22 09:35:13.798640347 +0000 UTC m=+1.033448483 container remove 63f29dfd1e5ab2f369513a678b97419477b8c0786711df9580ee7ee48c789ce7 (image=quay.io/ceph/ceph:v19, name=nervous_banach, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:35:13 np0005591760 systemd[1]: libpod-conmon-63f29dfd1e5ab2f369513a678b97419477b8c0786711df9580ee7ee48c789ce7.scope: Deactivated successfully.
Jan 22 04:35:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v19: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 KiB/s wr, 167 op/s
Jan 22 04:35:16 np0005591760 podman[96751]: 2026-01-22 09:35:16.846308375 +0000 UTC m=+5.304017999 container create 8734147619c4d521ec243cfa4099fc1b39e13ba29c211a0fdd19ac673f419833 (image=quay.io/ceph/grafana:10.4.0, name=wonderful_vaughan, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:16 np0005591760 podman[96751]: 2026-01-22 09:35:16.835914864 +0000 UTC m=+5.293624488 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 22 04:35:16 np0005591760 systemd[1]: Started libpod-conmon-8734147619c4d521ec243cfa4099fc1b39e13ba29c211a0fdd19ac673f419833.scope.
Jan 22 04:35:16 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:16 np0005591760 podman[96751]: 2026-01-22 09:35:16.907134991 +0000 UTC m=+5.364844616 container init 8734147619c4d521ec243cfa4099fc1b39e13ba29c211a0fdd19ac673f419833 (image=quay.io/ceph/grafana:10.4.0, name=wonderful_vaughan, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:16 np0005591760 podman[96751]: 2026-01-22 09:35:16.911545671 +0000 UTC m=+5.369255285 container start 8734147619c4d521ec243cfa4099fc1b39e13ba29c211a0fdd19ac673f419833 (image=quay.io/ceph/grafana:10.4.0, name=wonderful_vaughan, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:16 np0005591760 podman[96751]: 2026-01-22 09:35:16.912614091 +0000 UTC m=+5.370323705 container attach 8734147619c4d521ec243cfa4099fc1b39e13ba29c211a0fdd19ac673f419833 (image=quay.io/ceph/grafana:10.4.0, name=wonderful_vaughan, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:16 np0005591760 wonderful_vaughan[97033]: 472 0
Jan 22 04:35:16 np0005591760 systemd[1]: libpod-8734147619c4d521ec243cfa4099fc1b39e13ba29c211a0fdd19ac673f419833.scope: Deactivated successfully.
Jan 22 04:35:16 np0005591760 conmon[97033]: conmon 8734147619c4d521ec24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8734147619c4d521ec243cfa4099fc1b39e13ba29c211a0fdd19ac673f419833.scope/container/memory.events
Jan 22 04:35:16 np0005591760 podman[96751]: 2026-01-22 09:35:16.914479677 +0000 UTC m=+5.372189292 container died 8734147619c4d521ec243cfa4099fc1b39e13ba29c211a0fdd19ac673f419833 (image=quay.io/ceph/grafana:10.4.0, name=wonderful_vaughan, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:16 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a5d37098e35425c43e8e7c6c6cc8b3d604f5d8facea463b2b6495ab1bff33f19-merged.mount: Deactivated successfully.
Jan 22 04:35:16 np0005591760 podman[96751]: 2026-01-22 09:35:16.931676149 +0000 UTC m=+5.389385763 container remove 8734147619c4d521ec243cfa4099fc1b39e13ba29c211a0fdd19ac673f419833 (image=quay.io/ceph/grafana:10.4.0, name=wonderful_vaughan, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:16 np0005591760 systemd[1]: libpod-conmon-8734147619c4d521ec243cfa4099fc1b39e13ba29c211a0fdd19ac673f419833.scope: Deactivated successfully.
Jan 22 04:35:16 np0005591760 podman[97046]: 2026-01-22 09:35:16.977672328 +0000 UTC m=+0.029645418 container create 6fb87f951e6f9eb0ddcda94a1db1b254955824b8c203c1bf96286104f3b99b84 (image=quay.io/ceph/grafana:10.4.0, name=stupefied_fermat, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:17 np0005591760 systemd[1]: Started libpod-conmon-6fb87f951e6f9eb0ddcda94a1db1b254955824b8c203c1bf96286104f3b99b84.scope.
Jan 22 04:35:17 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:17 np0005591760 podman[97046]: 2026-01-22 09:35:17.03344416 +0000 UTC m=+0.085417239 container init 6fb87f951e6f9eb0ddcda94a1db1b254955824b8c203c1bf96286104f3b99b84 (image=quay.io/ceph/grafana:10.4.0, name=stupefied_fermat, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:17 np0005591760 podman[97046]: 2026-01-22 09:35:17.037297664 +0000 UTC m=+0.089270745 container start 6fb87f951e6f9eb0ddcda94a1db1b254955824b8c203c1bf96286104f3b99b84 (image=quay.io/ceph/grafana:10.4.0, name=stupefied_fermat, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:17 np0005591760 podman[97046]: 2026-01-22 09:35:17.038380882 +0000 UTC m=+0.090353962 container attach 6fb87f951e6f9eb0ddcda94a1db1b254955824b8c203c1bf96286104f3b99b84 (image=quay.io/ceph/grafana:10.4.0, name=stupefied_fermat, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:17 np0005591760 stupefied_fermat[97061]: 472 0
Jan 22 04:35:17 np0005591760 systemd[1]: libpod-6fb87f951e6f9eb0ddcda94a1db1b254955824b8c203c1bf96286104f3b99b84.scope: Deactivated successfully.
Jan 22 04:35:17 np0005591760 podman[97046]: 2026-01-22 09:35:17.039842375 +0000 UTC m=+0.091815455 container died 6fb87f951e6f9eb0ddcda94a1db1b254955824b8c203c1bf96286104f3b99b84 (image=quay.io/ceph/grafana:10.4.0, name=stupefied_fermat, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:17 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a2b8bde435116de364ae47d789c0b25d28edb4f6424f7d4397f9f5d0bbeea099-merged.mount: Deactivated successfully.
Jan 22 04:35:17 np0005591760 podman[97046]: 2026-01-22 09:35:16.965191682 +0000 UTC m=+0.017164772 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 22 04:35:17 np0005591760 podman[97046]: 2026-01-22 09:35:17.063735567 +0000 UTC m=+0.115708647 container remove 6fb87f951e6f9eb0ddcda94a1db1b254955824b8c203c1bf96286104f3b99b84 (image=quay.io/ceph/grafana:10.4.0, name=stupefied_fermat, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:17 np0005591760 systemd[1]: libpod-conmon-6fb87f951e6f9eb0ddcda94a1db1b254955824b8c203c1bf96286104f3b99b84.scope: Deactivated successfully.
Jan 22 04:35:17 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:17 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:17 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:17 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:17 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:17 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v20: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 KiB/s wr, 167 op/s
Jan 22 04:35:17 np0005591760 systemd[1]: Starting Ceph grafana.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:35:17 np0005591760 podman[97193]: 2026-01-22 09:35:17.680599676 +0000 UTC m=+0.031142769 container create 69534e59d7e144792f090c9414441b1986b3d2a1dd2f4af9ffb4fbe51024b14e (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefb6c753069f7f794f2fb13c3601a582a95ad793ea1c99c0ee4b1eefa1be134/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefb6c753069f7f794f2fb13c3601a582a95ad793ea1c99c0ee4b1eefa1be134/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefb6c753069f7f794f2fb13c3601a582a95ad793ea1c99c0ee4b1eefa1be134/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefb6c753069f7f794f2fb13c3601a582a95ad793ea1c99c0ee4b1eefa1be134/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefb6c753069f7f794f2fb13c3601a582a95ad793ea1c99c0ee4b1eefa1be134/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:17 np0005591760 podman[97193]: 2026-01-22 09:35:17.724669533 +0000 UTC m=+0.075212636 container init 69534e59d7e144792f090c9414441b1986b3d2a1dd2f4af9ffb4fbe51024b14e (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:17 np0005591760 podman[97193]: 2026-01-22 09:35:17.731852513 +0000 UTC m=+0.082395595 container start 69534e59d7e144792f090c9414441b1986b3d2a1dd2f4af9ffb4fbe51024b14e (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:17 np0005591760 bash[97193]: 69534e59d7e144792f090c9414441b1986b3d2a1dd2f4af9ffb4fbe51024b14e
Jan 22 04:35:17 np0005591760 podman[97193]: 2026-01-22 09:35:17.667020073 +0000 UTC m=+0.017563175 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 22 04:35:17 np0005591760 systemd[1]: Started Ceph grafana.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:35:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:35:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:35:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 22 04:35:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:17 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 864ae1ef-ef90-4a60-945d-830a512c65d0 (Updating grafana deployment (+1 -> 1))
Jan 22 04:35:17 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 864ae1ef-ef90-4a60-945d-830a512c65d0 (Updating grafana deployment (+1 -> 1)) in 7 seconds
Jan 22 04:35:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 22 04:35:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:17 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev e9e01123-47b1-4ae9-945e-0f5393a5b032 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 22 04:35:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Jan 22 04:35:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:17 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.duivti on compute-0
Jan 22 04:35:17 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.duivti on compute-0
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.87959454Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-01-22T09:35:17Z
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.881122939Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.88157251Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.881701433Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.881799668Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.881879781Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.881952819Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.8820222Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.882091771Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.882180519Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.882252475Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.882324901Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.882405655Z level=info msg=Target target=[all]
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.882505683Z level=info msg="Path Home" path=/usr/share/grafana
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.882576348Z level=info msg="Path Data" path=/var/lib/grafana
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.882648674Z level=info msg="Path Logs" path=/var/log/grafana
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.882721792Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.882805872Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=settings t=2026-01-22T09:35:17.88287903Z level=info msg="App mode production"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=sqlstore t=2026-01-22T09:35:17.883252856Z level=info msg="Connecting to DB" dbtype=sqlite3
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=sqlstore t=2026-01-22T09:35:17.883352906Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.883887736Z level=info msg="Starting DB migrations"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.885208353Z level=info msg="Executing migration" id="create migration_log table"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.886202283Z level=info msg="Migration successfully executed" id="create migration_log table" duration=993.518µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.887391401Z level=info msg="Executing migration" id="create user table"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.888109078Z level=info msg="Migration successfully executed" id="create user table" duration=717.657µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.88892491Z level=info msg="Executing migration" id="add unique index user.login"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.889540383Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=615.183µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.890248272Z level=info msg="Executing migration" id="add unique index user.email"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.890856892Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=609.972µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.891504687Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.892091667Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=586.749µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.892823591Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.893407886Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=583.995µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.894228957Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.896135973Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.907486ms
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.896819064Z level=info msg="Executing migration" id="create user table v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.897466379Z level=info msg="Migration successfully executed" id="create user table v2" duration=647.113µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.899048399Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.899613868Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=565.228µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.900251133Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.900817444Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=566.091µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.901442706Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.901801043Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=358.006µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.902345392Z level=info msg="Executing migration" id="Drop old table user_v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.90281455Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=468.806µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.903400358Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.90432712Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=926.383µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.904948994Z level=info msg="Executing migration" id="Update user table charset"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.905001925Z level=info msg="Migration successfully executed" id="Update user table charset" duration=53.441µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.905721175Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.906569799Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=848.233µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.907236699Z level=info msg="Executing migration" id="Add missing user data"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.907434634Z level=info msg="Migration successfully executed" id="Add missing user data" duration=198.215µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.908059756Z level=info msg="Executing migration" id="Add is_disabled column to user"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.90894033Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=880.614µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.909550453Z level=info msg="Executing migration" id="Add index user.login/user.email"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.910164636Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=613.932µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.910761023Z level=info msg="Executing migration" id="Add is_service_account column to user"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.91163779Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=876.797µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.912308168Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.918483763Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.175565ms
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.919263006Z level=info msg="Executing migration" id="Add uid column to user"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.920415835Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.153651ms
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.921213383Z level=info msg="Executing migration" id="Update uid column values for users"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.921493083Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=280.361µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.922303044Z level=info msg="Executing migration" id="Add unique index user_uid"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.923053473Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=749.957µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.923749498Z level=info msg="Executing migration" id="create temp user table v1-7"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.924377726Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=627.987µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.925121682Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.925687242Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=565.369µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.926372006Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.927009753Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=637.556µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.927661134Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.928248645Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=586.208µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.928974776Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.929549644Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=574.667µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.93020878Z level=info msg="Executing migration" id="Update temp_user table charset"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.930261319Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=52.148µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.930967745Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.93153625Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=566.761µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.93237728Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.933318669Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=941.149µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.934008123Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.934587679Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=579.406µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.935258637Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.935911651Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=652.773µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.936562392Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.939023445Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.460401ms
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.939720112Z level=info msg="Executing migration" id="create temp_user v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.940370372Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=649.968µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.940996165Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.941572795Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=576.39µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.942290751Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.942895145Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=603.903µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.943527039Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.944321411Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=791.786µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.945060599Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.945832808Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=771.92µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.946620448Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.947059818Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=438.939µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.947723944Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.948335861Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=611.777µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.948992192Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.949320923Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=328.621µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.94999652Z level=info msg="Executing migration" id="create star table"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.950509942Z level=info msg="Migration successfully executed" id="create star table" duration=513.141µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.951120596Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.951707996Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=596.127µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.952382842Z level=info msg="Executing migration" id="create org table v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.95296336Z level=info msg="Migration successfully executed" id="create org table v1" duration=580.208µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.95362977Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.954234362Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=604.392µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.954910541Z level=info msg="Executing migration" id="create org_user table v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.955451083Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=540.481µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.956134564Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.95674509Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=610.494µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.957496189Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.958151468Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=656.361µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.95888774Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.95950148Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=613.23µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.960146079Z level=info msg="Executing migration" id="Update org table charset"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.960196474Z level=info msg="Migration successfully executed" id="Update org table charset" duration=50.857µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.960831154Z level=info msg="Executing migration" id="Update org_user table charset"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.960880728Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=50.114µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.961580431Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.961738049Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=156.366µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.962442851Z level=info msg="Executing migration" id="create dashboard table"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.963064296Z level=info msg="Migration successfully executed" id="create dashboard table" duration=621.114µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.96369028Z level=info msg="Executing migration" id="add index dashboard.account_id"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.964355197Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=664.637µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.965014423Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.965667417Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=652.613µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.966438225Z level=info msg="Executing migration" id="create dashboard_tag table"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.966978866Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=540.492µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.967588258Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.968242936Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=654.448µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.969073186Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.969884821Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=811.523µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.970720231Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.97559181Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.87148ms
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.976293076Z level=info msg="Executing migration" id="create dashboard v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.976950599Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=657.273µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.977562025Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.978179863Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=617.708µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.978832086Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.979454082Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=620.824µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.98012993Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.980444975Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=314.824µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.981109862Z level=info msg="Executing migration" id="drop table dashboard_v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.981942327Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=832.093µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.982603085Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.982683588Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=80.893µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.983363453Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.984629627Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.265824ms
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.985665786Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.986951337Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.283758ms
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.987594121Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.98910607Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.511799ms
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.989962249Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.990592991Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=631.814µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.99127487Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.992524533Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.249573ms
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.993216461Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.993857032Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=640.692µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.994518383Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.995143063Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=624.841µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.995771371Z level=info msg="Executing migration" id="Update dashboard table charset"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.995837397Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=66.446µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.996535476Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.996586302Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=51.157µs
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.997269584Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.998610349Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.340354ms
Jan 22 04:35:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:17.999333075Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.000651116Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.317641ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.001715128Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.003130946Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.415587ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.003802635Z level=info msg="Executing migration" id="Add column uid in dashboard"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.005183506Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.38062ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.005819798Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.00603159Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=211.711µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.00671416Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.007347506Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=633.066µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.007988539Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.008593342Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=604.683µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.009933926Z level=info msg="Executing migration" id="Update dashboard title length"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.009991355Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=58.31µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.010763996Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.011415007Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=651.101µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.012109119Z level=info msg="Executing migration" id="create dashboard_provisioning"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.012665902Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=557.305µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.013305681Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.016753328Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.447057ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.017450076Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.01802333Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=573.024µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.018675913Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.019313199Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=637.005µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.020000198Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.020614249Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=613.971µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.021284316Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Jan 22 04:35:18 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.02156727Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=282.825µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.022179858Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.02263048Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=450.402µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.023266864Z level=info msg="Executing migration" id="Add check_sum column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.024625722Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.358549ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.025431646Z level=info msg="Executing migration" id="Add index for dashboard_title"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.026033253Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=601.166µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.02661906Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.026776358Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=157.258µs
Jan 22 04:35:18 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.036889077Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.037030595Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=142.139µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.037799358Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.038372121Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=572.332µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.038907824Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.040544799Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.634469ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.041368576Z level=info msg="Executing migration" id="create data_source table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.042308734Z level=info msg="Migration successfully executed" id="create data_source table" duration=939.966µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.043094269Z level=info msg="Executing migration" id="add index data_source.account_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.043941381Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=846.741µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.044684856Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.045384639Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=699.243µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.046022584Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.046669588Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=646.743µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.047296473Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.048032094Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=735.212µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.04866475Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.052589069Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=3.923628ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.053321063Z level=info msg="Executing migration" id="create data_source table v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.054030384Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=709.07µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.05467287Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.055414701Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=741.692µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.056021288Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.056721593Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=699.964µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.05742931Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.057927623Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=498.243µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.058558555Z level=info msg="Executing migration" id="Add column with_credentials"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.06019547Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.636655ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.061036661Z level=info msg="Executing migration" id="Add secure json data column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.062581992Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.545572ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.063223124Z level=info msg="Executing migration" id="Update data_source table charset"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.063243853Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=21.13µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.063872642Z level=info msg="Executing migration" id="Update initial version to 1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.06402483Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=151.998µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.065027876Z level=info msg="Executing migration" id="Add read_only data column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.067078964Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.056167ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.067815125Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.068011006Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=195.86µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.068852266Z level=info msg="Executing migration" id="Update json_data with nulls"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.069031435Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=179.158µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.069766766Z level=info msg="Executing migration" id="Add uid column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.072013614Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.246267ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.072750416Z level=info msg="Executing migration" id="Update uid value"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.072997663Z level=info msg="Migration successfully executed" id="Update uid value" duration=247.198µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.073961796Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.074829787Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=867.49µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.075548396Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.07634932Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=800.644µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.077125758Z level=info msg="Executing migration" id="create api_key table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.077928185Z level=info msg="Migration successfully executed" id="create api_key table" duration=802.247µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.078754718Z level=info msg="Executing migration" id="add index api_key.account_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.079573806Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=817.726µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.080373909Z level=info msg="Executing migration" id="add index api_key.key"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.081160927Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=787.059µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.081895325Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.082729982Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=834.236µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.083528252Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.084381044Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=861.408µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.085150639Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.08593904Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=788.17µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.086641097Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Jan 22 04:35:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.087439176Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=797.668µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.088095737Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.094874003Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.774007ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.095774535Z level=info msg="Executing migration" id="create api_key table v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.096442618Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=666.23µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.097088569Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.097768254Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=679.645µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.098463599Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.099132253Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=668.323µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.099773225Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.100464271Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=690.695µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.101310401Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.101616328Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=303.304µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.102266718Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.10273811Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=471.561µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.103395222Z level=info msg="Executing migration" id="Update api_key table charset"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.103473309Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=79.721µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.106090438Z level=info msg="Executing migration" id="Add expires to api_key table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.107736981Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.646243ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.108449187Z level=info msg="Executing migration" id="Add service account foreign key"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.110144833Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.693522ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.110802507Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.110940747Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=138.331µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.111626474Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.113520144Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.89316ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.114253711Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.115944578Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.690496ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.116659419Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.117279061Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=619.442µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.118019019Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.118480211Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=460.952µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.119121304Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.120094874Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=981.156µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.120758037Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.121468411Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=710.304µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.122150951Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.122874559Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=723.377µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.123536351Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.12426068Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=724.189µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.124925457Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.124986903Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=61.567µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.12571003Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.125744887Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=35.658µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.126491828Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.128342157Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=1.849868ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.12899986Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.130863554Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.863182ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.131552075Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.131616226Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=62.178µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.132376965Z level=info msg="Executing migration" id="create quota table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.132967291Z level=info msg="Migration successfully executed" id="create quota table v1" duration=590.014µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.133586702Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.134286123Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=700.623µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.135077971Z level=info msg="Executing migration" id="Update quota table charset"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.135111855Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=34.105µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.135850391Z level=info msg="Executing migration" id="create plugin_setting table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.136451488Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=599.122µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.137124379Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.137811028Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=686.398µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.138513065Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.140681595Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.16835ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.14159405Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.141614428Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=20.98µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.143068308Z level=info msg="Executing migration" id="create session table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.143899058Z level=info msg="Migration successfully executed" id="create session table" duration=830.63µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.144952449Z level=info msg="Executing migration" id="Drop old table playlist table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.145074249Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=122.242µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.145993427Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.146094048Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=100.842µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.14698943Z level=info msg="Executing migration" id="create playlist table v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.1477671Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=777.23µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.148830171Z level=info msg="Executing migration" id="create playlist item table v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.149531728Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=701.106µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.150562305Z level=info msg="Executing migration" id="Update playlist table charset"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.150582493Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=20.568µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.153226833Z level=info msg="Executing migration" id="Update playlist_item table charset"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.153295462Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=71.755µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.154304921Z level=info msg="Executing migration" id="Add playlist column created_at"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.156647781Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.3433ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.157502977Z level=info msg="Executing migration" id="Add playlist column updated_at"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.159575195Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.071817ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.160252365Z level=info msg="Executing migration" id="drop preferences table v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.160357784Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=105.72µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.161012452Z level=info msg="Executing migration" id="drop preferences table v3"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.161112381Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=100.74µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.161845897Z level=info msg="Executing migration" id="create preferences table v3"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.162480107Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=633.728µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.163172625Z level=info msg="Executing migration" id="Update preferences table charset"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.163226237Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=52.82µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.163919448Z level=info msg="Executing migration" id="Add column team_id in preferences"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.166222802Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.302432ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.1669142Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.167069694Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=153.5µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.167756723Z level=info msg="Executing migration" id="Add column week_start in preferences"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.169850381Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.093358ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.170532601Z level=info msg="Executing migration" id="Add column preferences.json_data"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.173055651Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.520075ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.173733883Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.173829924Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=96.712µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.174612844Z level=info msg="Executing migration" id="Add preferences index org_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.175351641Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=738.496µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.176542813Z level=info msg="Executing migration" id="Add preferences index user_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.177272342Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=729.619µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.180913306Z level=info msg="Executing migration" id="create alert table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.181823657Z level=info msg="Migration successfully executed" id="create alert table v1" duration=910.381µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.183005773Z level=info msg="Executing migration" id="add index alert org_id & id "
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.183765829Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=759.937µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.185012435Z level=info msg="Executing migration" id="add index alert state"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.18562884Z level=info msg="Migration successfully executed" id="add index alert state" duration=616.095µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.186311713Z level=info msg="Executing migration" id="add index alert dashboard_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.186937525Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=625.452µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.1875273Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.188057022Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=529.17µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.188740103Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.189411252Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=670.627µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.190073173Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.190869158Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=795.525µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.191497616Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.197914649Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=6.415879ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.198590456Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.199161616Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=570.88µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.199807797Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.200422901Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=646.943µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.201045457Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.201290832Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=245.124µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.201902097Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.202356376Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=454.398µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.203051089Z level=info msg="Executing migration" id="create alert_notification table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.203625224Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=573.805µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.204259464Z level=info msg="Executing migration" id="Add column is_default"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.206524506Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.2645ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.207202738Z level=info msg="Executing migration" id="Add column frequency"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.209613385Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.410547ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.210271921Z level=info msg="Executing migration" id="Add column send_reminder"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.213085781Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.81332ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.213726722Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.21600525Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.278188ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.21659215Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.217222862Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=630.351µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.217890674Z level=info msg="Executing migration" id="Update alert table charset"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.217910833Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=20.609µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.218660079Z level=info msg="Executing migration" id="Update alert_notification table charset"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.218678614Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=17.984µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.2193587Z level=info msg="Executing migration" id="create notification_journal table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.219916665Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=557.514µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.220495349Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.221134948Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=639.258µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.221722409Z level=info msg="Executing migration" id="drop alert_notification_journal"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.222356257Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=634.581µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.222980517Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.223580932Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=600.235µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.224217737Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.224850282Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=632.345µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.225409911Z level=info msg="Executing migration" id="Add for to alert table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.22768954Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.279359ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.228429679Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.23076788Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.33789ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.231382021Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.231515965Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=133.714µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.232197633Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.232821302Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=624.441µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.233440283Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.23406322Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=622.948µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.234659036Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.237038234Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.378949ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.237669488Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.237715986Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=46.559µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.238399498Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.23901928Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=619.521µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.239616048Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.240323926Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=707.769µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.240942295Z level=info msg="Executing migration" id="Drop old annotation table v4"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.241011466Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=68.42µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.241699718Z level=info msg="Executing migration" id="create annotation table v5"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.242364985Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=663.595µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.242940713Z level=info msg="Executing migration" id="add index annotation 0 v3"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.243548443Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=607.339µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.244160089Z level=info msg="Executing migration" id="add index annotation 1 v3"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.244751467Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=591.048µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.24538301Z level=info msg="Executing migration" id="add index annotation 2 v3"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.246004316Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=620.955µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.246748893Z level=info msg="Executing migration" id="add index annotation 3 v3"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.247442664Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=693.39µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.24820699Z level=info msg="Executing migration" id="add index annotation 4 v3"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.248939414Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=732.014µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.249594292Z level=info msg="Executing migration" id="Update annotation table charset"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.249609051Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=15.329µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.250285058Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.253192214Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=2.907016ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.253857012Z level=info msg="Executing migration" id="Drop category_id index"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.254470863Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=613.66µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.255069705Z level=info msg="Executing migration" id="Add column tags to annotation table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.257557307Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.488915ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.25823582Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.258735174Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=499.325µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.259305281Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.259952285Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=647.023µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.260546238Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.261188252Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=639.99µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.261900158Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.268872119Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=6.971319ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.269540873Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.270080443Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=539.27µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.270670298Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.271356125Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=684.093µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.271990454Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.272235799Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=245.034µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.272873183Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.27332131Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=448.158µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.273994663Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.274136061Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=141.317µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.274800366Z level=info msg="Executing migration" id="Add created time to annotation table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.277346831Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=2.545884ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.277961192Z level=info msg="Executing migration" id="Add updated time to annotation table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.280461911Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.500388ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.281108222Z level=info msg="Executing migration" id="Add index for created in annotation table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.281736881Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=628.256µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.282363295Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.282978809Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=615.154µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.283569245Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.283727194Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=159.041µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.284500386Z level=info msg="Executing migration" id="Add epoch_end column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.287080694Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=2.580228ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.287702329Z level=info msg="Executing migration" id="Add index for epoch_end"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.28833781Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=634.971µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.288948305Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.289072209Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=124.065µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.289710164Z level=info msg="Executing migration" id="Move region to single row"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.28997209Z level=info msg="Migration successfully executed" id="Move region to single row" duration=261.795µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.290616007Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.291289851Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=673.753µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.292088871Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.29271183Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=622.788µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.293501733Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.294160289Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=658.365µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.294740996Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.295386276Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=644.458µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.295955212Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.296565516Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=610.123µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.297185106Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.297809167Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=623.519µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.298366139Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.298411265Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=45.406µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.299157956Z level=info msg="Executing migration" id="create test_data table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.299741039Z level=info msg="Migration successfully executed" id="create test_data table" duration=582.612µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.300391499Z level=info msg="Executing migration" id="create dashboard_version table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.300990111Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=598.362µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.301628126Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.302278506Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=650.23µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.302938564Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.303585578Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=646.653µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.304214807Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.304347788Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=133.151µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.304937112Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.305260634Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=322.259µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.30588295Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.305926272Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=43.784µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.306622589Z level=info msg="Executing migration" id="create team table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.307197195Z level=info msg="Migration successfully executed" id="create team table" duration=573.464µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.307836785Z level=info msg="Executing migration" id="add index team.org_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.308553771Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=715.192µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.30922551Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.309887552Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=661.871µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.310527321Z level=info msg="Executing migration" id="Add column uid in team"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.31334594Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=2.818168ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.314007761Z level=info msg="Executing migration" id="Update uid column values in team"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.314148507Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=140.826µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.314794769Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.315444658Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=649.568µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.316081742Z level=info msg="Executing migration" id="create team member table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.316641431Z level=info msg="Migration successfully executed" id="create team member table" duration=559.399µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.317232608Z level=info msg="Executing migration" id="add index team_member.org_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.317867768Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=634.93µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.318477442Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.319132639Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=655.419µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.319726112Z level=info msg="Executing migration" id="add index team_member.team_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.320398714Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=672.473µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.321021491Z level=info msg="Executing migration" id="Add column email to team table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.324069133Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=3.047021ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.324719012Z level=info msg="Executing migration" id="Add column external to team_member table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.327748169Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.028776ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.328408166Z level=info msg="Executing migration" id="Add column permission to team_member table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.331350189Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=2.942724ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.331911198Z level=info msg="Executing migration" id="create dashboard acl table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.332637943Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=726.423µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.3333184Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.334013854Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=695.163µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.334615522Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.335369097Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=753.185µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.336098315Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.336764786Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=665.849µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.337361193Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.33801564Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=654.096µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.338635131Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.339417441Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=781.838µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.340038724Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.340706026Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=666.961µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.341341748Z level=info msg="Executing migration" id="add index dashboard_permission"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.342026142Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=683.953µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.342593635Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.34305597Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=462.054µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.343743328Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.343920805Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=177.336µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.34456935Z level=info msg="Executing migration" id="create tag table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.345159416Z level=info msg="Migration successfully executed" id="create tag table" duration=589.795µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.345765511Z level=info msg="Executing migration" id="add index tag.key_value"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.346438063Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=673.974µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.347207828Z level=info msg="Executing migration" id="create login attempt table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.347743982Z level=info msg="Migration successfully executed" id="create login attempt table" duration=535.683µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.348389924Z level=info msg="Executing migration" id="add index login_attempt.username"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.349046394Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=656.492µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.349616302Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.350288683Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=672.181µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.350924014Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.359570641Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=8.646036ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.360290343Z level=info msg="Executing migration" id="create login_attempt v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.360853567Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=562.963µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.361462898Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.362106496Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=644.859µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.362719344Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.362977061Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=257.427µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.363501142Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.364003332Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=501.478µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.36461597Z level=info msg="Executing migration" id="create user auth table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.365178412Z level=info msg="Migration successfully executed" id="create user auth table" duration=562.142µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.365770051Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.366449226Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=678.673µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.36714989Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.367195266Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=44.635µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.368013092Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.371271893Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.260163ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.371925499Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.375056459Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.130369ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.375769046Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.378897651Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.128004ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.379591822Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.38273263Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.140417ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.383405794Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.384076421Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=670.338µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.384651639Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.387858702Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.204809ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.388479686Z level=info msg="Executing migration" id="create server_lock table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.389076845Z level=info msg="Migration successfully executed" id="create server_lock table" duration=596.928µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.389727315Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.390419152Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=692.038µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.391108386Z level=info msg="Executing migration" id="create user auth token table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.391726975Z level=info msg="Migration successfully executed" id="create user auth token table" duration=618.529µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.392386402Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.393164824Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=778.031µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.393728829Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.39440581Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=676.51µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.395049176Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.395827237Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=777.289µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.396476475Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.399876472Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.399436ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.400558211Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.401247295Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=688.884µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.401859973Z level=info msg="Executing migration" id="create cache_data table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.402497659Z level=info msg="Migration successfully executed" id="create cache_data table" duration=637.446µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.403149862Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.403820709Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=670.457µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.404459527Z level=info msg="Executing migration" id="create short_url table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.4051117Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=651.022µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.405761649Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.406473755Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=712.147µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.407155694Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.40720091Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=45.567µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.407838896Z level=info msg="Executing migration" id="delete alert_definition table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.407905452Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=66.646µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.408553697Z level=info msg="Executing migration" id="recreate alert_definition table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.409192545Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=638.708µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.409806906Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.410501781Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=693.281µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.411319316Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.412024489Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=704.481µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.412829131Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.412874265Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=45.465µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.413562758Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.414244587Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=681.719µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.414837527Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.415497976Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=660.288µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.416108219Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.416823302Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=714.812µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.417381968Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.41808642Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=704.092µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.418676746Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.422364508Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=3.687432ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.422989049Z level=info msg="Executing migration" id="drop alert_definition table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.42374103Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=751.73µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.424349401Z level=info msg="Executing migration" id="delete alert_definition_version table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.424416307Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=67.227µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.425089189Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.425728137Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=638.808µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.426359641Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.42707855Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=717.146µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.427682753Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.428406451Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=723.068µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.428970527Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.429017446Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=47.28µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.429670611Z level=info msg="Executing migration" id="drop alert_definition_version table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.430415258Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=743.996µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.431171649Z level=info msg="Executing migration" id="create alert_instance table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.431862274Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=690.194µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.432558359Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.433306034Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=747.384µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.43389665Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.434588178Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=691.236µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.435189675Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.438940937Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=3.750731ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.439583252Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.440258378Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=674.957µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.440858072Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.44152881Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=670.377µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.442154782Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.461960397Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=19.805283ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.462660861Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.480071668Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=17.410556ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.480798281Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.481495059Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=696.536µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.482256689Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.482933668Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=676.469µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.483602102Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.48707113Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=3.467206ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.487667898Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.491130235Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=3.461854ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.491752441Z level=info msg="Executing migration" id="create alert_rule table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.492453496Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=701.676µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.493167336Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.493897947Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=730.3µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.494596147Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.495326078Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=729.469µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.495937994Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.49670286Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=764.455µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.497444252Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.497489899Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=47.381µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.498258702Z level=info msg="Executing migration" id="add column for to alert_rule"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.501972202Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=3.712979ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.502625497Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.506250171Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=3.624333ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.506915919Z level=info msg="Executing migration" id="add column labels to alert_rule"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.510490326Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=3.574297ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.511094609Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.511765237Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=670.508µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.512464188Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.513199248Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=734.649µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.513836091Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.517424645Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=3.588134ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.518099292Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.521668188Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=3.568406ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.522348715Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.523057375Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=708.118µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.523814086Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.527382082Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=3.565931ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.528072908Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.53173921Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=3.667384ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.53246455Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.532510868Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=46.588µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.53323147Z level=info msg="Executing migration" id="create alert_rule_version table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.534043195Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=811.964µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.534682544Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.535397716Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=714.791µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.536062172Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.536839772Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=777.088µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.537482137Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.537528635Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=45.547µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.538233046Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.54201153Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=3.77665ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.542724547Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.54647582Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=3.750962ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.547234033Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.550974165Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=3.739811ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.551873413Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.555609678Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=3.736043ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.55633512Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.560111129Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=3.775689ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.560747651Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.560813055Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=65.664µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.561531874Z level=info msg="Executing migration" id=create_alert_configuration_table
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.56213799Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=605.885µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.562762171Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.566643247Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=3.880586ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.567310759Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.567355334Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=44.926µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.568081487Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.571987511Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=3.905934ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.572614536Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.573328436Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=713.489µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.573971993Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.577898406Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=3.925812ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.578599542Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.579187554Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=587.712µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.579816342Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.580536233Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=719.761µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.581160092Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.585043104Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=3.882391ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.585701398Z level=info msg="Executing migration" id="create provenance_type table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.586296664Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=595.015µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.586974655Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.587691431Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=717.407µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.588355757Z level=info msg="Executing migration" id="create alert_image table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.588938168Z level=info msg="Migration successfully executed" id="create alert_image table" duration=582.201µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.589539164Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.59025074Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=711.395µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.59091758Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.590963476Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=47.148µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.591743181Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.592441201Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=697.6µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.593113021Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.593823174Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=709.922µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.594458464Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.594707656Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.595342826Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.595681597Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=338.541µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.596306468Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.597004769Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=698.06µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.597631534Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.601689024Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=4.057ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.602355043Z level=info msg="Executing migration" id="create library_element table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.603168933Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=813.638µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.603841154Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.604582054Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=740.199µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.605221022Z level=info msg="Executing migration" id="create library_element_connection table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.605830003Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=609.632µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.606462969Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.607232926Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=769.565µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.607887564Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.608580863Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=692.689µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.609222667Z level=info msg="Executing migration" id="increase max description length to 2048"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.609241022Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=18.755µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.609865363Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.609909556Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=44.433µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.610650277Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.610878718Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=226.89µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.61155125Z level=info msg="Executing migration" id="create data_keys table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.612288824Z level=info msg="Migration successfully executed" id="create data_keys table" duration=737.334µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.612971084Z level=info msg="Executing migration" id="create secrets table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.613567883Z level=info msg="Migration successfully executed" id="create secrets table" duration=596.648µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.614206269Z level=info msg="Executing migration" id="rename data_keys name column to id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.637078009Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=22.870908ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.63773959Z level=info msg="Executing migration" id="add name column into data_keys"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.642229499Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=4.489198ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.642884307Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.642995738Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=111.792µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.643679861Z level=info msg="Executing migration" id="rename data_keys name column to label"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.666572993Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=22.892821ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.667371893Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.690159906Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=22.787601ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.690914131Z level=info msg="Executing migration" id="create kv_store table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.691560794Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=646.422µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.692275605Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.69305591Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=778.681µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.693733141Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.693899696Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=166.605µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.694582426Z level=info msg="Executing migration" id="create permission table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.69526662Z level=info msg="Migration successfully executed" id="create permission table" duration=683.943µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.695934463Z level=info msg="Executing migration" id="add unique index permission.role_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.696628244Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=693.791µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.697306977Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.698046165Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=738.768µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.698682307Z level=info msg="Executing migration" id="create role table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.699336054Z level=info msg="Migration successfully executed" id="create role table" duration=653.476µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.699990291Z level=info msg="Executing migration" id="add column display_name"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.704682671Z level=info msg="Migration successfully executed" id="add column display_name" duration=4.692132ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.705381452Z level=info msg="Executing migration" id="add column group_name"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.709974757Z level=info msg="Migration successfully executed" id="add column group_name" duration=4.593025ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.710596552Z level=info msg="Executing migration" id="add index role.org_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.71135162Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=754.866µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.711976411Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.712735005Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=758.223µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.713384202Z level=info msg="Executing migration" id="add index role_org_id_uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.714132016Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=748.314µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.714743973Z level=info msg="Executing migration" id="create team role table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.715422135Z level=info msg="Migration successfully executed" id="create team role table" duration=678.272µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.716031828Z level=info msg="Executing migration" id="add index team_role.org_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.71681026Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=776.519µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.717483111Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.718299054Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=816.833µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.71896325Z level=info msg="Executing migration" id="add index team_role.team_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.719671849Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=708.219µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.720306149Z level=info msg="Executing migration" id="create user role table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.720924257Z level=info msg="Migration successfully executed" id="create user role table" duration=617.798µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.721756662Z level=info msg="Executing migration" id="add index user_role.org_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.722503193Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=746.241µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.72323149Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.723966028Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=733.297µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.724560783Z level=info msg="Executing migration" id="add index user_role.user_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.725318404Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=756.81µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.725985937Z level=info msg="Executing migration" id="create builtin role table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.726585861Z level=info msg="Migration successfully executed" id="create builtin role table" duration=599.524µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.727222845Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.727966351Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=743.195µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.730016587Z level=info msg="Executing migration" id="add index builtin_role.name"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.730745505Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=728.708µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.731368614Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.736399625Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.030771ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.737039625Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.737770586Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=730.401µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.738451705Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.739222271Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=770.256µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.739904741Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.740616577Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=711.666µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.74126867Z level=info msg="Executing migration" id="add unique index role.uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.741996306Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=727.385µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.742574619Z level=info msg="Executing migration" id="create seed assignment table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.743143815Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=568.885µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.743761854Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.744525667Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=763.312µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.745298128Z level=info msg="Executing migration" id="add column hidden to role table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.75028155Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=4.98298ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.750941317Z level=info msg="Executing migration" id="permission kind migration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.755823918Z level=info msg="Migration successfully executed" id="permission kind migration" duration=4.882281ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.75650192Z level=info msg="Executing migration" id="permission attribute migration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.761276506Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=4.774296ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.761979175Z level=info msg="Executing migration" id="permission identifier migration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.766747941Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=4.768575ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.767495204Z level=info msg="Executing migration" id="add permission identifier index"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.768360619Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=862.221µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.769010419Z level=info msg="Executing migration" id="add permission action scope role_id index"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.769845116Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=834.327µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.770472783Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.771209015Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=736.001µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.771837242Z level=info msg="Executing migration" id="create query_history table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.772488694Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=650.97µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.773163239Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.773895434Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=731.453µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.774606828Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.774656573Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=50.435µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.775527158Z level=info msg="Executing migration" id="rbac disabled migrator"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.775558338Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=32.602µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.776230308Z level=info msg="Executing migration" id="teams permissions migration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.77652805Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=297.453µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.777210261Z level=info msg="Executing migration" id="dashboard permissions"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.777577675Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=367.884µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.778537179Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.779021675Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=484.316µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.779836836Z level=info msg="Executing migration" id="drop managed folder create actions"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.780000125Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=163.359µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.780973174Z level=info msg="Executing migration" id="alerting notification permissions"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.781314349Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=341.124µs
Jan 22 04:35:18 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:18 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:18 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:18 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:18 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:18 np0005591760 ceph-mon[74254]: Deploying daemon haproxy.rgw.default.compute-0.duivti on compute-0
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.782590873Z level=info msg="Executing migration" id="create query_history_star table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.784396467Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.804842ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.785161564Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.786038721Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=888.451µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.786793208Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.791812878Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.01964ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.792508643Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.792590729Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=82.527µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.793405689Z level=info msg="Executing migration" id="create correlation table v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.794244355Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=838.996µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.795090735Z level=info msg="Executing migration" id="add index correlations.uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.795892841Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=801.646µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.79656904Z level=info msg="Executing migration" id="add index correlations.source_uid"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.797363743Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=794.462µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.798264325Z level=info msg="Executing migration" id="add correlation config column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.803462863Z level=info msg="Migration successfully executed" id="add correlation config column" duration=5.198088ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.804160011Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.804955825Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=795.544µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.80559834Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.806510515Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=911.994µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.807199567Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.821922092Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=14.722624ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.822735039Z level=info msg="Executing migration" id="create correlation v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.823632705Z level=info msg="Migration successfully executed" id="create correlation v2" duration=897.326µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.82451247Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.825271073Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=758.604µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.825859346Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.826661742Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=802.106µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.827339634Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.828086195Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=746.491µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.828699855Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.828905274Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=203.734µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.829575842Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.830218337Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=642.275µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.830801017Z level=info msg="Executing migration" id="add provisioning column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.835917732Z level=info msg="Migration successfully executed" id="add provisioning column" duration=5.116564ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.836530911Z level=info msg="Executing migration" id="create entity_events table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.837163227Z level=info msg="Migration successfully executed" id="create entity_events table" duration=631.785µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.837793006Z level=info msg="Executing migration" id="create dashboard public config v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.838543585Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=750.538µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.839301378Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.839671097Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.84037034Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.840652182Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.841275631Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.841910571Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=634.529µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.842517458Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.843271955Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=754.046µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.843950358Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.844686589Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=735.842µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.845386403Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.846185433Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=798.54µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.846805235Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.847661784Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=856.63µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.848368521Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.849217936Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=850.398µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.849855031Z level=info msg="Executing migration" id="Drop public config table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.850520249Z level=info msg="Migration successfully executed" id="Drop public config table" duration=665.076µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.852799448Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.853634546Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=835.068µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.854309312Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.855157195Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=847.552µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.85577354Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.856634478Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=860.467µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.857303844Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.858153098Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=849.546µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.858731462Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.876908677Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=18.176674ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.877737625Z level=info msg="Executing migration" id="add annotations_enabled column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.88328356Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=5.545845ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.884026054Z level=info msg="Executing migration" id="add time_selection_enabled column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.889236806Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=5.211914ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.889965984Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.89016431Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=199.196µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.890772519Z level=info msg="Executing migration" id="add share column"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.895838326Z level=info msg="Migration successfully executed" id="add share column" duration=5.065355ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.896519844Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.896697841Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=177.656µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.897496511Z level=info msg="Executing migration" id="create file table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.898206303Z level=info msg="Migration successfully executed" id="create file table" duration=709.572µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.898902589Z level=info msg="Executing migration" id="file table idx: path natural pk"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.899756012Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=852.983µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.900387036Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.901236131Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=850.027µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.90190195Z level=info msg="Executing migration" id="create file_meta table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.90248893Z level=info msg="Migration successfully executed" id="create file_meta table" duration=586.81µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.903142314Z level=info msg="Executing migration" id="file table idx: path key"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.90406565Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=923.105µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.904697204Z level=info msg="Executing migration" id="set path collation in file table"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.904771394Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=74.38µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.905497818Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.905568772Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=71.455µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.906225123Z level=info msg="Executing migration" id="managed permissions migration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.90665184Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=425.103µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.90736627Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.907574354Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=207.853µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.908269969Z level=info msg="Executing migration" id="RBAC action name migrator"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.909388183Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.117874ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.910160172Z level=info msg="Executing migration" id="Add UID column to playlist"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.91549574Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=5.335256ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.916172629Z level=info msg="Executing migration" id="Update uid column values in playlist"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.91632111Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=148.411µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.916961461Z level=info msg="Executing migration" id="Add index for uid in playlist"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.918072512Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.11072ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.918749862Z level=info msg="Executing migration" id="update group index for alert rules"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.919064798Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=315.337µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.919759982Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.919969928Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=209.696µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.920667908Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.921068977Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=400.929µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.921707674Z level=info msg="Executing migration" id="add action column to seed_assignment"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.927447375Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=5.737827ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.928075543Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.933465803Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=5.389098ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.934109651Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.934937816Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=828.596µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.935814444Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.99707987Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=61.264865ms
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.998107354Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.998987196Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=879.563µs
Jan 22 04:35:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:18.999670799Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.000529151Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=858.083µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.001276395Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.019143694Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=17.866718ms
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.019975207Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.025369073Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=5.392405ms
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.029507898Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.02975775Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=250.072µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.030494945Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.03066712Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=172.055µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.031346945Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.031550451Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=204.818µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.032281983Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.032479677Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=197.583µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.033188957Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.033386201Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=197.253µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.034100821Z level=info msg="Executing migration" id="create folder table"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.034791026Z level=info msg="Migration successfully executed" id="create folder table" duration=673.263µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.035431517Z level=info msg="Executing migration" id="Add index for parent_uid"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.036397594Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=965.786µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.037055829Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.038100292Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.044223ms
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.038969636Z level=info msg="Executing migration" id="Update folder title length"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.038989694Z level=info msg="Migration successfully executed" id="Update folder title length" duration=20.629µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.039724424Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.04061141Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=886.556µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.04137784Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.042154839Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=776.758µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.042754954Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.043597195Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=841.811µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.044262623Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.044600833Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=338.089µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.045207801Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.045407678Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=199.648µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.046026358Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.046811121Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=784.353µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.047502248Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.048320355Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=817.747µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.048959002Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.049697608Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=737.123µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.050321027Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.05113181Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=810.322µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.051717337Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.052493705Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=775.998µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.053149495Z level=info msg="Executing migration" id="create anon_device table"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.05376036Z level=info msg="Migration successfully executed" id="create anon_device table" duration=610.664µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.054445946Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.055312183Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=865.947µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.055996197Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.056746355Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=749.657µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.057405872Z level=info msg="Executing migration" id="create signing_key table"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.058102638Z level=info msg="Migration successfully executed" id="create signing_key table" duration=696.617µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.058765141Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.059552009Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=786.838µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.060418376Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.061195646Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=777.18µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.061744093Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.06196412Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=220.537µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.062846588Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.068443899Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=5.59687ms
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.069116753Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.069689975Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=573.785µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.070367076Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.071170134Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=804.171µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.071736725Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.072545885Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=809.11µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.073213577Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.073996147Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=781.077µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.075102498Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.077091399Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.98837ms
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.077873477Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.078858049Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=984.211µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.079541671Z level=info msg="Executing migration" id="create sso_setting table"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.080354999Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=812.956µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.080989508Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.08156211Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=573.214µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.082219533Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.082412609Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=193.506µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.083071925Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.083113544Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=41.83µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.083803989Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.08929999Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=5.49586ms
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.089927857Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.095537552Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=5.609374ms
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.096148959Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.096410714Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=260.091µs
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=migrator t=2026-01-22T09:35:19.097063176Z level=info msg="migrations completed" performed=547 skipped=0 duration=1.211898145s
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=sqlstore t=2026-01-22T09:35:19.098045864Z level=info msg="Created default organization"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=secrets t=2026-01-22T09:35:19.098918825Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=plugin.store t=2026-01-22T09:35:19.117160292Z level=info msg="Loading plugins..."
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=local.finder t=2026-01-22T09:35:19.176245207Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=plugin.store t=2026-01-22T09:35:19.176311252Z level=info msg="Plugins loaded" count=55 duration=59.150659ms
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=query_data t=2026-01-22T09:35:19.178451768Z level=info msg="Query Service initialization"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=live.push_http t=2026-01-22T09:35:19.18150448Z level=info msg="Live Push Gateway initialization"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=ngalert.migration t=2026-01-22T09:35:19.183300685Z level=info msg=Starting
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=ngalert.migration t=2026-01-22T09:35:19.183623536Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=ngalert.migration orgID=1 t=2026-01-22T09:35:19.18395914Z level=info msg="Migrating alerts for organisation"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=ngalert.migration orgID=1 t=2026-01-22T09:35:19.184458043Z level=info msg="Alerts found to migrate" alerts=0
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=ngalert.migration t=2026-01-22T09:35:19.185828825Z level=info msg="Completed alerting migration"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=ngalert.state.manager t=2026-01-22T09:35:19.198152563Z level=info msg="Running in alternative execution of Error/NoData mode"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=infra.usagestats.collector t=2026-01-22T09:35:19.199488819Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=provisioning.datasources t=2026-01-22T09:35:19.200290395Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=provisioning.alerting t=2026-01-22T09:35:19.208156026Z level=info msg="starting to provision alerting"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=provisioning.alerting t=2026-01-22T09:35:19.208172036Z level=info msg="finished to provision alerting"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=grafanaStorageLogger t=2026-01-22T09:35:19.209381944Z level=info msg="Storage starting"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=http.server t=2026-01-22T09:35:19.209753997Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=http.server t=2026-01-22T09:35:19.210026532Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=ngalert.state.manager t=2026-01-22T09:35:19.210084171Z level=info msg="Warming state cache for startup"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=ngalert.state.manager t=2026-01-22T09:35:19.22535894Z level=info msg="State cache has been initialized" states=0 duration=15.274017ms
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=ngalert.multiorg.alertmanager t=2026-01-22T09:35:19.225478355Z level=info msg="Starting MultiOrg Alertmanager"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=ngalert.scheduler t=2026-01-22T09:35:19.225500016Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=ticker t=2026-01-22T09:35:19.225526486Z level=info msg=starting first_tick=2026-01-22T09:35:20Z
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=provisioning.dashboard t=2026-01-22T09:35:19.225942032Z level=info msg="starting to provision dashboards"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=grafana.update.checker t=2026-01-22T09:35:19.269455277Z level=info msg="Update check succeeded" duration=61.186718ms
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=plugins.update.checker t=2026-01-22T09:35:19.270252033Z level=info msg="Update check succeeded" duration=61.262161ms
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=sqlstore.transactions t=2026-01-22T09:35:19.285581726Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=provisioning.dashboard t=2026-01-22T09:35:19.393496076Z level=info msg="finished to provision dashboards"
Jan 22 04:35:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v21: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 KiB/s wr, 167 op/s
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=grafana-apiserver t=2026-01-22T09:35:19.69135494Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Jan 22 04:35:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=grafana-apiserver t=2026-01-22T09:35:19.691725962Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Jan 22 04:35:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[96607]: ts=2026-01-22T09:35:20.933Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.004620552s
Jan 22 04:35:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v22: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 22 04:35:21 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:35:21 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:35:21 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 11 completed events
Jan 22 04:35:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:35:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:21 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:35:21 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:35:21 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:35:21 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:35:22 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:22 np0005591760 podman[97305]: 2026-01-22 09:35:22.486698518 +0000 UTC m=+4.296476223 container create 7473de781e984e811017772aefb252672fc7cb0b69604c085ac56212f4ecf81e (image=quay.io/ceph/haproxy:2.3, name=eager_carver)
Jan 22 04:35:22 np0005591760 systemd[1]: Started libpod-conmon-7473de781e984e811017772aefb252672fc7cb0b69604c085ac56212f4ecf81e.scope.
Jan 22 04:35:22 np0005591760 podman[97305]: 2026-01-22 09:35:22.476588935 +0000 UTC m=+4.286366650 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 22 04:35:22 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:22 np0005591760 podman[97305]: 2026-01-22 09:35:22.535289603 +0000 UTC m=+4.345067319 container init 7473de781e984e811017772aefb252672fc7cb0b69604c085ac56212f4ecf81e (image=quay.io/ceph/haproxy:2.3, name=eager_carver)
Jan 22 04:35:22 np0005591760 podman[97305]: 2026-01-22 09:35:22.540107131 +0000 UTC m=+4.349884826 container start 7473de781e984e811017772aefb252672fc7cb0b69604c085ac56212f4ecf81e (image=quay.io/ceph/haproxy:2.3, name=eager_carver)
Jan 22 04:35:22 np0005591760 podman[97305]: 2026-01-22 09:35:22.541349369 +0000 UTC m=+4.351127055 container attach 7473de781e984e811017772aefb252672fc7cb0b69604c085ac56212f4ecf81e (image=quay.io/ceph/haproxy:2.3, name=eager_carver)
Jan 22 04:35:22 np0005591760 eager_carver[97404]: 0 0
Jan 22 04:35:22 np0005591760 systemd[1]: libpod-7473de781e984e811017772aefb252672fc7cb0b69604c085ac56212f4ecf81e.scope: Deactivated successfully.
Jan 22 04:35:22 np0005591760 podman[97305]: 2026-01-22 09:35:22.544473977 +0000 UTC m=+4.354251671 container died 7473de781e984e811017772aefb252672fc7cb0b69604c085ac56212f4ecf81e (image=quay.io/ceph/haproxy:2.3, name=eager_carver)
Jan 22 04:35:22 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4df9d985853dad937cabce8104a0be3402a6416993d4503374a7c79e3d2c0fbb-merged.mount: Deactivated successfully.
Jan 22 04:35:22 np0005591760 podman[97305]: 2026-01-22 09:35:22.56147579 +0000 UTC m=+4.371253485 container remove 7473de781e984e811017772aefb252672fc7cb0b69604c085ac56212f4ecf81e (image=quay.io/ceph/haproxy:2.3, name=eager_carver)
Jan 22 04:35:22 np0005591760 systemd[1]: libpod-conmon-7473de781e984e811017772aefb252672fc7cb0b69604c085ac56212f4ecf81e.scope: Deactivated successfully.
Jan 22 04:35:22 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:22 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:22 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:22 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:22 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:22 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:23 np0005591760 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.duivti for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:35:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:35:23 np0005591760 podman[97537]: 2026-01-22 09:35:23.168399015 +0000 UTC m=+0.028984549 container create fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:35:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1de9ddd8038100fb3d558ce3339e23c2c9b54c411596fc373517fef2598dc30/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:23 np0005591760 podman[97537]: 2026-01-22 09:35:23.202439932 +0000 UTC m=+0.063025466 container init fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:35:23 np0005591760 podman[97537]: 2026-01-22 09:35:23.2062896 +0000 UTC m=+0.066875134 container start fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:35:23 np0005591760 bash[97537]: fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9
Jan 22 04:35:23 np0005591760 podman[97537]: 2026-01-22 09:35:23.15581819 +0000 UTC m=+0.016403743 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 22 04:35:23 np0005591760 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.duivti for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:35:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti[97549]: [NOTICE] 021/093523 (2) : New worker #1 (4) forked
Jan 22 04:35:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.002000030s ======
Jan 22 04:35:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:23.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000030s
Jan 22 04:35:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:35:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:35:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 22 04:35:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:23 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.czpvbf on compute-2
Jan 22 04:35:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.czpvbf on compute-2
Jan 22 04:35:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v23: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 22 04:35:24 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:24 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:24 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:24 np0005591760 ceph-mon[74254]: Deploying daemon haproxy.rgw.default.compute-2.czpvbf on compute-2
Jan 22 04:35:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:25.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v24: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:26.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:35:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:35:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 22 04:35:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Jan 22 04:35:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:26 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:35:26 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:35:26 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:35:26 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:35:26 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.idkctu on compute-0
Jan 22 04:35:26 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.idkctu on compute-0
Jan 22 04:35:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:27.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v25: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:27 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:27 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:27 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:27 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:27 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:35:27 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:35:27 np0005591760 ceph-mon[74254]: Deploying daemon keepalived.rgw.default.compute-0.idkctu on compute-0
Jan 22 04:35:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:35:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 04:35:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:28.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 04:35:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:29.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v26: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 04:35:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:30.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 04:35:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:31.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v27: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:31 np0005591760 podman[97642]: 2026-01-22 09:35:31.677411876 +0000 UTC m=+4.819451137 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 22 04:35:31 np0005591760 podman[97642]: 2026-01-22 09:35:31.690149218 +0000 UTC m=+4.832188459 container create f84cb85a1ab9529667a668645f1850649632dda171f035929e645b4b8b4d9611 (image=quay.io/ceph/keepalived:2.2.4, name=compassionate_pascal, vendor=Red Hat, Inc., name=keepalived, com.redhat.component=keepalived-container, release=1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, architecture=x86_64, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vcs-type=git, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 22 04:35:31 np0005591760 systemd[1]: Started libpod-conmon-f84cb85a1ab9529667a668645f1850649632dda171f035929e645b4b8b4d9611.scope.
Jan 22 04:35:31 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:31 np0005591760 podman[97642]: 2026-01-22 09:35:31.746535089 +0000 UTC m=+4.888574341 container init f84cb85a1ab9529667a668645f1850649632dda171f035929e645b4b8b4d9611 (image=quay.io/ceph/keepalived:2.2.4, name=compassionate_pascal, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., name=keepalived, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, version=2.2.4)
Jan 22 04:35:31 np0005591760 podman[97642]: 2026-01-22 09:35:31.752526526 +0000 UTC m=+4.894565767 container start f84cb85a1ab9529667a668645f1850649632dda171f035929e645b4b8b4d9611 (image=quay.io/ceph/keepalived:2.2.4, name=compassionate_pascal, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, release=1793, build-date=2023-02-22T09:23:20)
Jan 22 04:35:31 np0005591760 podman[97642]: 2026-01-22 09:35:31.753979312 +0000 UTC m=+4.896018554 container attach f84cb85a1ab9529667a668645f1850649632dda171f035929e645b4b8b4d9611 (image=quay.io/ceph/keepalived:2.2.4, name=compassionate_pascal, architecture=x86_64, name=keepalived, io.openshift.expose-services=, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, version=2.2.4, io.buildah.version=1.28.2, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 22 04:35:31 np0005591760 compassionate_pascal[97721]: 0 0
Jan 22 04:35:31 np0005591760 systemd[1]: libpod-f84cb85a1ab9529667a668645f1850649632dda171f035929e645b4b8b4d9611.scope: Deactivated successfully.
Jan 22 04:35:31 np0005591760 podman[97642]: 2026-01-22 09:35:31.757284852 +0000 UTC m=+4.899324093 container died f84cb85a1ab9529667a668645f1850649632dda171f035929e645b4b8b4d9611 (image=quay.io/ceph/keepalived:2.2.4, name=compassionate_pascal, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.openshift.expose-services=, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived)
Jan 22 04:35:31 np0005591760 systemd[1]: var-lib-containers-storage-overlay-696812d1e8a52a21a11d74e2c5422912a4eb478dd5c579f541522fa1787000f5-merged.mount: Deactivated successfully.
Jan 22 04:35:31 np0005591760 podman[97642]: 2026-01-22 09:35:31.775455024 +0000 UTC m=+4.917494265 container remove f84cb85a1ab9529667a668645f1850649632dda171f035929e645b4b8b4d9611 (image=quay.io/ceph/keepalived:2.2.4, name=compassionate_pascal, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, architecture=x86_64, vcs-type=git, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, name=keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.expose-services=)
Jan 22 04:35:31 np0005591760 systemd[1]: libpod-conmon-f84cb85a1ab9529667a668645f1850649632dda171f035929e645b4b8b4d9611.scope: Deactivated successfully.
Jan 22 04:35:31 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:31 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:31 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:32 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:32 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:32 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:32 np0005591760 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.idkctu for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:35:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:32.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:32 np0005591760 podman[97855]: 2026-01-22 09:35:32.504671072 +0000 UTC m=+0.028960342 container create 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, name=keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vcs-type=git, vendor=Red Hat, Inc.)
Jan 22 04:35:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f63b77fc1bba840fbbe1f232b5184c5d0e7050f5eccf07c79cf98eacbf375a/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:32 np0005591760 podman[97855]: 2026-01-22 09:35:32.54833914 +0000 UTC m=+0.072628430 container init 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, name=keepalived, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc.)
Jan 22 04:35:32 np0005591760 podman[97855]: 2026-01-22 09:35:32.552326468 +0000 UTC m=+0.076615739 container start 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, name=keepalived, distribution-scope=public, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., release=1793, com.redhat.component=keepalived-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9)
Jan 22 04:35:32 np0005591760 bash[97855]: 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329
Jan 22 04:35:32 np0005591760 podman[97855]: 2026-01-22 09:35:32.492892884 +0000 UTC m=+0.017182175 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 22 04:35:32 np0005591760 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.idkctu for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:35:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:35:32 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 22 04:35:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:35:32 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 22 04:35:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:35:32 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 22 04:35:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:35:32 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 22 04:35:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:35:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:35:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:35:32 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 22 04:35:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:35:32 2026: Starting VRRP child process, pid=4
Jan 22 04:35:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:35:32 2026: Startup complete
Jan 22 04:35:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:35:32 2026: (VI_0) Entering BACKUP STATE (init)
Jan 22 04:35:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 22 04:35:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:35:32 2026: VRRP_Script(check_backend) succeeded
Jan 22 04:35:32 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:35:32 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:35:32 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:35:32 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:35:32 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.udkjbg on compute-2
Jan 22 04:35:32 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.udkjbg on compute-2
Jan 22 04:35:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:35:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:35:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:33.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:35:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v28: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:33 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:33 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:33 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:33 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:35:33 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:35:33 np0005591760 ceph-mon[74254]: Deploying daemon keepalived.rgw.default.compute-2.udkjbg on compute-2
Jan 22 04:35:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:34.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:35.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v29: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:35:36 2026: (VI_0) Entering MASTER STATE
Jan 22 04:35:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:36.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:35:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:35:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 22 04:35:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:36 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev e9e01123-47b1-4ae9-945e-0f5393a5b032 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 22 04:35:36 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event e9e01123-47b1-4ae9-945e-0f5393a5b032 (Updating ingress.rgw.default deployment (+4 -> 4)) in 19 seconds
Jan 22 04:35:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 22 04:35:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:36 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 8b1a0efb-9d53-40af-955e-a7cbbfc47ae5 (Updating prometheus deployment (+1 -> 1))
Jan 22 04:35:36 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:36 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:36 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:36 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:36 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Jan 22 04:35:36 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Jan 22 04:35:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:37.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v30: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:37 np0005591760 ceph-mon[74254]: Deploying daemon prometheus.compute-0 on compute-0
Jan 22 04:35:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:35:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:38.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:35:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:39.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:35:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v31: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:35:40 2026: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Jan 22 04:35:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:40.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:41.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:41 np0005591760 podman[97961]: 2026-01-22 09:35:41.390002671 +0000 UTC m=+4.217158471 volume create 2f2a43471eea014e1cbd149504802aa120060cf123ce295b9f7f4df5cd87d8cd
Jan 22 04:35:41 np0005591760 podman[97961]: 2026-01-22 09:35:41.395194871 +0000 UTC m=+4.222350670 container create 2062b9de03c1db8efdf85d3719e2204b26731e317ed8d11026b2b1630fb55988 (image=quay.io/prometheus/prometheus:v2.51.0, name=loving_chatelet, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:41 np0005591760 podman[97961]: 2026-01-22 09:35:41.379449815 +0000 UTC m=+4.206605636 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 22 04:35:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v32: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:41 np0005591760 systemd[1]: Started libpod-conmon-2062b9de03c1db8efdf85d3719e2204b26731e317ed8d11026b2b1630fb55988.scope.
Jan 22 04:35:41 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bfcc5f0f68da83ac8f9ffc0715532b283f16b5e3d48a0bd1982882208283183/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:41 np0005591760 podman[97961]: 2026-01-22 09:35:41.463184333 +0000 UTC m=+4.290340153 container init 2062b9de03c1db8efdf85d3719e2204b26731e317ed8d11026b2b1630fb55988 (image=quay.io/prometheus/prometheus:v2.51.0, name=loving_chatelet, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:41 np0005591760 podman[97961]: 2026-01-22 09:35:41.468888087 +0000 UTC m=+4.296043876 container start 2062b9de03c1db8efdf85d3719e2204b26731e317ed8d11026b2b1630fb55988 (image=quay.io/prometheus/prometheus:v2.51.0, name=loving_chatelet, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:41 np0005591760 podman[97961]: 2026-01-22 09:35:41.470585066 +0000 UTC m=+4.297740856 container attach 2062b9de03c1db8efdf85d3719e2204b26731e317ed8d11026b2b1630fb55988 (image=quay.io/prometheus/prometheus:v2.51.0, name=loving_chatelet, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:41 np0005591760 loving_chatelet[98175]: 65534 65534
Jan 22 04:35:41 np0005591760 systemd[1]: libpod-2062b9de03c1db8efdf85d3719e2204b26731e317ed8d11026b2b1630fb55988.scope: Deactivated successfully.
Jan 22 04:35:41 np0005591760 conmon[98175]: conmon 2062b9de03c1db8efdf8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2062b9de03c1db8efdf85d3719e2204b26731e317ed8d11026b2b1630fb55988.scope/container/memory.events
Jan 22 04:35:41 np0005591760 podman[97961]: 2026-01-22 09:35:41.473150182 +0000 UTC m=+4.300305982 container died 2062b9de03c1db8efdf85d3719e2204b26731e317ed8d11026b2b1630fb55988 (image=quay.io/prometheus/prometheus:v2.51.0, name=loving_chatelet, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:41 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 12 completed events
Jan 22 04:35:41 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:35:41 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6bfcc5f0f68da83ac8f9ffc0715532b283f16b5e3d48a0bd1982882208283183-merged.mount: Deactivated successfully.
Jan 22 04:35:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:41 np0005591760 podman[97961]: 2026-01-22 09:35:41.500020231 +0000 UTC m=+4.327176031 container remove 2062b9de03c1db8efdf85d3719e2204b26731e317ed8d11026b2b1630fb55988 (image=quay.io/prometheus/prometheus:v2.51.0, name=loving_chatelet, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:41 np0005591760 podman[97961]: 2026-01-22 09:35:41.501414259 +0000 UTC m=+4.328570059 volume remove 2f2a43471eea014e1cbd149504802aa120060cf123ce295b9f7f4df5cd87d8cd
Jan 22 04:35:41 np0005591760 systemd[1]: libpod-conmon-2062b9de03c1db8efdf85d3719e2204b26731e317ed8d11026b2b1630fb55988.scope: Deactivated successfully.
Jan 22 04:35:41 np0005591760 podman[98190]: 2026-01-22 09:35:41.57277193 +0000 UTC m=+0.036585256 volume create 9ce4f23258404b29aacc58a18060fbb60fb3b2d4710867c8827a0206636b1abb
Jan 22 04:35:41 np0005591760 podman[98190]: 2026-01-22 09:35:41.580338797 +0000 UTC m=+0.044152132 container create 1e0ccbab67962500daf07389cad6e6fc101af40ff9bb32f82c992279f24e6e7c (image=quay.io/prometheus/prometheus:v2.51.0, name=reverent_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:41 np0005591760 systemd[1]: Started libpod-conmon-1e0ccbab67962500daf07389cad6e6fc101af40ff9bb32f82c992279f24e6e7c.scope.
Jan 22 04:35:41 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36bd35161230c6a726a0eff98b0fee6a2c96e19f5f8a0dc4d7ff71e29d60f08a/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:41 np0005591760 podman[98190]: 2026-01-22 09:35:41.635173849 +0000 UTC m=+0.098987204 container init 1e0ccbab67962500daf07389cad6e6fc101af40ff9bb32f82c992279f24e6e7c (image=quay.io/prometheus/prometheus:v2.51.0, name=reverent_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:41 np0005591760 podman[98190]: 2026-01-22 09:35:41.640543773 +0000 UTC m=+0.104357108 container start 1e0ccbab67962500daf07389cad6e6fc101af40ff9bb32f82c992279f24e6e7c (image=quay.io/prometheus/prometheus:v2.51.0, name=reverent_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:41 np0005591760 reverent_borg[98203]: 65534 65534
Jan 22 04:35:41 np0005591760 systemd[1]: libpod-1e0ccbab67962500daf07389cad6e6fc101af40ff9bb32f82c992279f24e6e7c.scope: Deactivated successfully.
Jan 22 04:35:41 np0005591760 podman[98190]: 2026-01-22 09:35:41.643131612 +0000 UTC m=+0.106944947 container attach 1e0ccbab67962500daf07389cad6e6fc101af40ff9bb32f82c992279f24e6e7c (image=quay.io/prometheus/prometheus:v2.51.0, name=reverent_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:41 np0005591760 podman[98190]: 2026-01-22 09:35:41.643493454 +0000 UTC m=+0.107306799 container died 1e0ccbab67962500daf07389cad6e6fc101af40ff9bb32f82c992279f24e6e7c (image=quay.io/prometheus/prometheus:v2.51.0, name=reverent_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:41 np0005591760 podman[98190]: 2026-01-22 09:35:41.561158556 +0000 UTC m=+0.024971911 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 22 04:35:41 np0005591760 systemd[1]: var-lib-containers-storage-overlay-36bd35161230c6a726a0eff98b0fee6a2c96e19f5f8a0dc4d7ff71e29d60f08a-merged.mount: Deactivated successfully.
Jan 22 04:35:41 np0005591760 podman[98190]: 2026-01-22 09:35:41.665303834 +0000 UTC m=+0.129117168 container remove 1e0ccbab67962500daf07389cad6e6fc101af40ff9bb32f82c992279f24e6e7c (image=quay.io/prometheus/prometheus:v2.51.0, name=reverent_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:41 np0005591760 podman[98190]: 2026-01-22 09:35:41.667240164 +0000 UTC m=+0.131053509 volume remove 9ce4f23258404b29aacc58a18060fbb60fb3b2d4710867c8827a0206636b1abb
Jan 22 04:35:41 np0005591760 systemd[1]: libpod-conmon-1e0ccbab67962500daf07389cad6e6fc101af40ff9bb32f82c992279f24e6e7c.scope: Deactivated successfully.
Jan 22 04:35:41 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:41 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:41 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:41 np0005591760 systemd[1]: Reloading.
Jan 22 04:35:41 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:35:42 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:35:42 np0005591760 systemd[1]: Starting Ceph prometheus.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:35:42 np0005591760 podman[98335]: 2026-01-22 09:35:42.384101981 +0000 UTC m=+0.033852695 container create a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2026357ed7315797fa98725335677726a799bac5e18dc3eff67f35715b70c4/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2026357ed7315797fa98725335677726a799bac5e18dc3eff67f35715b70c4/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:42 np0005591760 podman[98335]: 2026-01-22 09:35:42.433886682 +0000 UTC m=+0.083637415 container init a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:42 np0005591760 podman[98335]: 2026-01-22 09:35:42.438666394 +0000 UTC m=+0.088417106 container start a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:42 np0005591760 bash[98335]: a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132
Jan 22 04:35:42 np0005591760 podman[98335]: 2026-01-22 09:35:42.369944259 +0000 UTC m=+0.019694993 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 22 04:35:42 np0005591760 systemd[1]: Started Ceph prometheus.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.474Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.474Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.474Z caller=main.go:623 level=info host_details="(Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 x86_64 compute-0 (none))"
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.474Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.474Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.476Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Jan 22 04:35:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:42.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.482Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.486Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.486Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.489Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.489Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.693µs
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.489Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.490Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.490Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=32.03µs wal_replay_duration=1.052995ms wbl_replay_duration=151ns total_replay_duration=1.111175ms
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.492Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.492Z caller=main.go:1153 level=info msg="TSDB started"
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.492Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Jan 22 04:35:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:35:42 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 22 04:35:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:42 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 8b1a0efb-9d53-40af-955e-a7cbbfc47ae5 (Updating prometheus deployment (+1 -> 1))
Jan 22 04:35:42 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 8b1a0efb-9d53-40af-955e-a7cbbfc47ae5 (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.519Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=27.414135ms db_storage=1.483µs remote_storage=2.626µs web_handler=491ns query_engine=1.252µs scrape=3.999009ms scrape_sd=251.695µs notify=34.845µs notify_sd=40.588µs rules=22.602192ms tracing=8.737µs
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.519Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Jan 22 04:35:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0[98347]: ts=2026-01-22T09:35:42.520Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Jan 22 04:35:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Jan 22 04:35:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:35:43.102811) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074543102890, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 5827, "num_deletes": 253, "total_data_size": 11664792, "memory_usage": 12409536, "flush_reason": "Manual Compaction"}
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074543126095, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 10292976, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 122, "largest_seqno": 5944, "table_properties": {"data_size": 10272895, "index_size": 12607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6405, "raw_key_size": 60643, "raw_average_key_size": 23, "raw_value_size": 10223653, "raw_average_value_size": 4018, "num_data_blocks": 560, "num_entries": 2544, "num_filter_entries": 2544, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074338, "oldest_key_time": 1769074338, "file_creation_time": 1769074543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 23345 microseconds, and 14130 cpu microseconds.
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:35:43.126150) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 10292976 bytes OK
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:35:43.126197) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:35:43.126815) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:35:43.126828) EVENT_LOG_v1 {"time_micros": 1769074543126825, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:35:43.126851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 11639170, prev total WAL file size 11639170, number of live WAL files 2.
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:35:43.128493) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(10051KB) 13(45KB) 8(1944B)]
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074543128604, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 10341628, "oldest_snapshot_seqno": -1}
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2349 keys, 10323830 bytes, temperature: kUnknown
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074543153413, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 10323830, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10304100, "index_size": 12773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 5893, "raw_key_size": 59192, "raw_average_key_size": 25, "raw_value_size": 10256585, "raw_average_value_size": 4366, "num_data_blocks": 569, "num_entries": 2349, "num_filter_entries": 2349, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769074543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:35:43.153631) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 10323830 bytes
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:35:43.154019) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 415.5 rd, 414.8 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(9.9, 0.0 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2637, records dropped: 288 output_compression: NoCompression
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:35:43.154035) EVENT_LOG_v1 {"time_micros": 1769074543154026, "job": 4, "event": "compaction_finished", "compaction_time_micros": 24891, "compaction_time_cpu_micros": 19209, "output_level": 6, "num_output_files": 1, "total_output_size": 10323830, "num_input_records": 2637, "num_output_records": 2349, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074543155450, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074543155559, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074543155648, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:35:43.128404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:35:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:43.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v33: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  1: '-n'
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  2: 'mgr.compute-0.rfmoog'
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  3: '-f'
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  4: '--setuser'
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  5: 'ceph'
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  6: '--setgroup'
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  7: 'ceph'
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  8: '--default-log-to-file=false'
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  9: '--default-log-to-journald=true'
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr respawn  exe_path /proc/self/exe
Jan 22 04:35:43 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.rfmoog(active, since 52s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:35:43 np0005591760 systemd[1]: session-35.scope: Deactivated successfully.
Jan 22 04:35:43 np0005591760 systemd[1]: session-35.scope: Consumed 35.845s CPU time.
Jan 22 04:35:43 np0005591760 systemd-logind[747]: Session 35 logged out. Waiting for processes to exit.
Jan 22 04:35:43 np0005591760 systemd-logind[747]: Removed session 35.
Jan 22 04:35:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ignoring --setuser ceph since I am not root
Jan 22 04:35:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ignoring --setgroup ceph since I am not root
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: pidfile_write: ignore empty --pid-file
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'alerts'
Jan 22 04:35:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:43.729+0000 7ff3e283e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'balancer'
Jan 22 04:35:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:43.802+0000 7ff3e283e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 04:35:43 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'cephadm'
Jan 22 04:35:44 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'crash'
Jan 22 04:35:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:44.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:44 np0005591760 ceph-mon[74254]: from='mgr.14496 192.168.122.100:0/2579283684' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Jan 22 04:35:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:44.563+0000 7ff3e283e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 04:35:44 np0005591760 ceph-mgr[74522]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 04:35:44 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'dashboard'
Jan 22 04:35:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'devicehealth'
Jan 22 04:35:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:45.192+0000 7ff3e283e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 04:35:45 np0005591760 ceph-mgr[74522]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 04:35:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'diskprediction_local'
Jan 22 04:35:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:45.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 22 04:35:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 22 04:35:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  from numpy import show_config as show_numpy_config
Jan 22 04:35:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:45.340+0000 7ff3e283e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 04:35:45 np0005591760 ceph-mgr[74522]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 04:35:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'influx'
Jan 22 04:35:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:45.404+0000 7ff3e283e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 04:35:45 np0005591760 ceph-mgr[74522]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 04:35:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'insights'
Jan 22 04:35:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'iostat'
Jan 22 04:35:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:45.524+0000 7ff3e283e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 04:35:45 np0005591760 ceph-mgr[74522]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 04:35:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'k8sevents'
Jan 22 04:35:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'localpool'
Jan 22 04:35:45 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'mds_autoscaler'
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'mirroring'
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'nfs'
Jan 22 04:35:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:46.391+0000 7ff3e283e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'orchestrator'
Jan 22 04:35:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:46.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:46.615+0000 7ff3e283e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'osd_perf_query'
Jan 22 04:35:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:46.690+0000 7ff3e283e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'osd_support'
Jan 22 04:35:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:46.754+0000 7ff3e283e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'pg_autoscaler'
Jan 22 04:35:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:46.828+0000 7ff3e283e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'progress'
Jan 22 04:35:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:46.896+0000 7ff3e283e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 04:35:46 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'prometheus'
Jan 22 04:35:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:47.202+0000 7ff3e283e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 04:35:47 np0005591760 ceph-mgr[74522]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 04:35:47 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rbd_support'
Jan 22 04:35:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:47.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:47.290+0000 7ff3e283e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 04:35:47 np0005591760 ceph-mgr[74522]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 04:35:47 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'restful'
Jan 22 04:35:47 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rgw'
Jan 22 04:35:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:47.668+0000 7ff3e283e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 04:35:47 np0005591760 ceph-mgr[74522]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 04:35:47 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'rook'
Jan 22 04:35:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:35:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:48.153+0000 7ff3e283e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'selftest'
Jan 22 04:35:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:48.216+0000 7ff3e283e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'snap_schedule'
Jan 22 04:35:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:48.286+0000 7ff3e283e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'stats'
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'status'
Jan 22 04:35:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:48.416+0000 7ff3e283e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'telegraf'
Jan 22 04:35:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:48.478+0000 7ff3e283e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'telemetry'
Jan 22 04:35:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:48.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:48.616+0000 7ff3e283e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'test_orchestrator'
Jan 22 04:35:48 np0005591760 python3[98421]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:35:48 np0005591760 podman[98423]: 2026-01-22 09:35:48.720306232 +0000 UTC m=+0.042025843 container create 561877d1768c403163c73e43b5ac42d04baafc31c13243aa8a6d8e9f36f3dd5f (image=quay.io/ceph/ceph:v19, name=modest_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:35:48 np0005591760 systemd[1]: Started libpod-conmon-561877d1768c403163c73e43b5ac42d04baafc31c13243aa8a6d8e9f36f3dd5f.scope.
Jan 22 04:35:48 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:48 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5c96143961943756e1ea0def1225848045247a3e102bb97166569de2baf240/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:48 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5c96143961943756e1ea0def1225848045247a3e102bb97166569de2baf240/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:48 np0005591760 podman[98423]: 2026-01-22 09:35:48.788196638 +0000 UTC m=+0.109916269 container init 561877d1768c403163c73e43b5ac42d04baafc31c13243aa8a6d8e9f36f3dd5f (image=quay.io/ceph/ceph:v19, name=modest_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:35:48 np0005591760 podman[98423]: 2026-01-22 09:35:48.796872044 +0000 UTC m=+0.118591655 container start 561877d1768c403163c73e43b5ac42d04baafc31c13243aa8a6d8e9f36f3dd5f (image=quay.io/ceph/ceph:v19, name=modest_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 04:35:48 np0005591760 podman[98423]: 2026-01-22 09:35:48.798360049 +0000 UTC m=+0.120079661 container attach 561877d1768c403163c73e43b5ac42d04baafc31c13243aa8a6d8e9f36f3dd5f (image=quay.io/ceph/ceph:v19, name=modest_noyce, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:48 np0005591760 podman[98423]: 2026-01-22 09:35:48.705958502 +0000 UTC m=+0.027678133 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:35:48 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.bisona restarted
Jan 22 04:35:48 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.bisona started
Jan 22 04:35:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:48.820+0000 7ff3e283e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 04:35:48 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'volumes'
Jan 22 04:35:48 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.rfmoog(active, since 57s), standbys: compute-1.upcmhd, compute-2.bisona
Jan 22 04:35:48 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.upcmhd restarted
Jan 22 04:35:48 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.upcmhd started
Jan 22 04:35:48 np0005591760 modest_noyce[98435]: could not fetch user info: no user info saved
Jan 22 04:35:48 np0005591760 systemd[1]: libpod-561877d1768c403163c73e43b5ac42d04baafc31c13243aa8a6d8e9f36f3dd5f.scope: Deactivated successfully.
Jan 22 04:35:48 np0005591760 podman[98423]: 2026-01-22 09:35:48.957533312 +0000 UTC m=+0.279252924 container died 561877d1768c403163c73e43b5ac42d04baafc31c13243aa8a6d8e9f36f3dd5f (image=quay.io/ceph/ceph:v19, name=modest_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:35:48 np0005591760 systemd[1]: var-lib-containers-storage-overlay-9d5c96143961943756e1ea0def1225848045247a3e102bb97166569de2baf240-merged.mount: Deactivated successfully.
Jan 22 04:35:48 np0005591760 podman[98423]: 2026-01-22 09:35:48.983025874 +0000 UTC m=+0.304745484 container remove 561877d1768c403163c73e43b5ac42d04baafc31c13243aa8a6d8e9f36f3dd5f (image=quay.io/ceph/ceph:v19, name=modest_noyce, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 22 04:35:48 np0005591760 systemd[1]: libpod-conmon-561877d1768c403163c73e43b5ac42d04baafc31c13243aa8a6d8e9f36f3dd5f.scope: Deactivated successfully.
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:49.076+0000 7ff3e283e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr[py] Loading python module 'zabbix'
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:49.144+0000 7ff3e283e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Active manager daemon compute-0.rfmoog restarted
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rfmoog
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: ms_deliver_dispatch: unhandled message 0x560ee3c87860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map Activating!
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr handle_mgr_map I am now activating
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.rfmoog(active, starting, since 0.0243392s), standbys: compute-2.bisona, compute-1.upcmhd
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: balancer
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Starting
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Manager daemon compute-0.rfmoog is now available
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:35:49
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: cephadm
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: crash
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: dashboard
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: devicehealth
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: iostat
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: nfs
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: orchestrator
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO sso] Loading SSO DB version=1
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] Starting
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: pg_autoscaler
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: progress
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [progress INFO root] Loading...
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7ff38940cc70>, <progress.module.GhostEvent object at 0x7ff38940c8e0>, <progress.module.GhostEvent object at 0x7ff38940c8b0>, <progress.module.GhostEvent object at 0x7ff38940c820>, <progress.module.GhostEvent object at 0x7ff38940c7f0>, <progress.module.GhostEvent object at 0x7ff38940c670>, <progress.module.GhostEvent object at 0x7ff38940c6d0>, <progress.module.GhostEvent object at 0x7ff385368040>, <progress.module.GhostEvent object at 0x7ff385368070>, <progress.module.GhostEvent object at 0x7ff3853680a0>, <progress.module.GhostEvent object at 0x7ff3853680d0>, <progress.module.GhostEvent object at 0x7ff385368100>] historic events
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [progress INFO root] Loaded OSDMap, ready.
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: prometheus
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:49.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [prometheus INFO root] server_addr: :: server_port: 9283
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [prometheus INFO root] Cache enabled
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [prometheus INFO root] starting metric collection thread
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [prometheus INFO root] Starting engine...
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: [22/Jan/2026:09:35:49] ENGINE Bus STARTING
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.error] [22/Jan/2026:09:35:49] ENGINE Bus STARTING
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: CherryPy Checker:
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: The Application mounted at '' has an empty config.
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 
Jan 22 04:35:49 np0005591760 python3[98556]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 43df7a30-cf5f-5209-adfd-bf44298b19f2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] recovery thread starting
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] starting setup
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: rbd_support
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: restful
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: status
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: telemetry
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [restful INFO root] server_addr: :: server_port: 8003
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [restful WARNING root] server not running: no certificate configured
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"} v 0)
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"}]: dispatch
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:35:49 np0005591760 podman[98636]: 2026-01-22 09:35:49.324799378 +0000 UTC m=+0.051981804 container create cbfb183f7e171b43923b4628806fd7055d84cb45aa94b6fae98b598fc9107ab8 (image=quay.io/ceph/ceph:v19, name=stoic_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] PerfHandler: starting
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TaskHandler: starting
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"} v 0)
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"}]: dispatch
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:35:49 np0005591760 podman[98636]: 2026-01-22 09:35:49.304737875 +0000 UTC m=+0.031920312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:35:49 np0005591760 systemd[1]: Started libpod-conmon-cbfb183f7e171b43923b4628806fd7055d84cb45aa94b6fae98b598fc9107ab8.scope.
Jan 22 04:35:49 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:35:49 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e13b9248e8dd9ae8aaa61befed5fe9a88d860d8d8e5b5524fa7f0f25093f7c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:49 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e13b9248e8dd9ae8aaa61befed5fe9a88d860d8d8e5b5524fa7f0f25093f7c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: mgr load Constructed class from module: volumes
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 22 04:35:49 np0005591760 podman[98636]: 2026-01-22 09:35:49.46406606 +0000 UTC m=+0.191248497 container init cbfb183f7e171b43923b4628806fd7055d84cb45aa94b6fae98b598fc9107ab8 (image=quay.io/ceph/ceph:v19, name=stoic_bhabha, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 04:35:49 np0005591760 podman[98636]: 2026-01-22 09:35:49.470481415 +0000 UTC m=+0.197663832 container start cbfb183f7e171b43923b4628806fd7055d84cb45aa94b6fae98b598fc9107ab8 (image=quay.io/ceph/ceph:v19, name=stoic_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] setup complete
Jan 22 04:35:49 np0005591760 podman[98636]: 2026-01-22 09:35:49.475189162 +0000 UTC m=+0.202371577 container attach cbfb183f7e171b43923b4628806fd7055d84cb45aa94b6fae98b598fc9107ab8 (image=quay.io/ceph/ceph:v19, name=stoic_bhabha, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:49.476+0000 7ff36e893640 -1 client.0 error registering admin socket command: (17) File exists
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: client.0 error registering admin socket command: (17) File exists
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:49.484+0000 7ff36a919640 -1 client.0 error registering admin socket command: (17) File exists
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: client.0 error registering admin socket command: (17) File exists
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:49.484+0000 7ff36a919640 -1 client.0 error registering admin socket command: (17) File exists
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: client.0 error registering admin socket command: (17) File exists
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:49.484+0000 7ff36a919640 -1 client.0 error registering admin socket command: (17) File exists
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: client.0 error registering admin socket command: (17) File exists
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:49.484+0000 7ff36a919640 -1 client.0 error registering admin socket command: (17) File exists
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: client.0 error registering admin socket command: (17) File exists
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:49.484+0000 7ff36a919640 -1 client.0 error registering admin socket command: (17) File exists
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: client.0 error registering admin socket command: (17) File exists
Jan 22 04:35:49 np0005591760 systemd-logind[747]: New session 36 of user ceph-admin.
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: [22/Jan/2026:09:35:49] ENGINE Serving on http://:::9283
Jan 22 04:35:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: [22/Jan/2026:09:35:49] ENGINE Bus STARTED
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.error] [22/Jan/2026:09:35:49] ENGINE Serving on http://:::9283
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.error] [22/Jan/2026:09:35:49] ENGINE Bus STARTED
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [prometheus INFO root] Engine started.
Jan 22 04:35:49 np0005591760 systemd[1]: Started Session 36 of User ceph-admin.
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]: {
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "user_id": "openstack",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "display_name": "openstack",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "email": "",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "suspended": 0,
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "max_buckets": 1000,
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "subusers": [],
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "keys": [
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:        {
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:            "user": "openstack",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:            "access_key": "BN211IVBHGTT2PSN941Z",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:            "secret_key": "xUdhlm2IzdMvrrvnxiP1zr4ASp3i0BDDlWPMNJoN",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:            "active": true,
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:            "create_date": "2026-01-22T09:35:49.627761Z"
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:        }
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    ],
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "swift_keys": [],
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "caps": [],
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "op_mask": "read, write, delete",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "default_placement": "",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "default_storage_class": "",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "placement_tags": [],
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "bucket_quota": {
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:        "enabled": false,
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:        "check_on_raw": false,
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:        "max_size": -1,
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:        "max_size_kb": 0,
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:        "max_objects": -1
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    },
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "user_quota": {
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:        "enabled": false,
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:        "check_on_raw": false,
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:        "max_size": -1,
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:        "max_size_kb": 0,
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:        "max_objects": -1
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    },
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "temp_url_keys": [],
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "type": "rgw",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "mfa_ids": [],
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "account_id": "",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "path": "/",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "create_date": "2026-01-22T09:35:49.627545Z",
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "tags": [],
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]:    "group_ids": []
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]: }
Jan 22 04:35:49 np0005591760 stoic_bhabha[98692]: 
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 22 04:35:49 np0005591760 systemd[1]: libpod-cbfb183f7e171b43923b4628806fd7055d84cb45aa94b6fae98b598fc9107ab8.scope: Deactivated successfully.
Jan 22 04:35:49 np0005591760 podman[98636]: 2026-01-22 09:35:49.655036953 +0000 UTC m=+0.382219368 container died cbfb183f7e171b43923b4628806fd7055d84cb45aa94b6fae98b598fc9107ab8 (image=quay.io/ceph/ceph:v19, name=stoic_bhabha, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:35:49 np0005591760 systemd[1]: var-lib-containers-storage-overlay-24e13b9248e8dd9ae8aaa61befed5fe9a88d860d8d8e5b5524fa7f0f25093f7c-merged.mount: Deactivated successfully.
Jan 22 04:35:49 np0005591760 podman[98636]: 2026-01-22 09:35:49.676796417 +0000 UTC m=+0.403978832 container remove cbfb183f7e171b43923b4628806fd7055d84cb45aa94b6fae98b598fc9107ab8 (image=quay.io/ceph/ceph:v19, name=stoic_bhabha, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:35:49 np0005591760 systemd[1]: libpod-conmon-cbfb183f7e171b43923b4628806fd7055d84cb45aa94b6fae98b598fc9107ab8.scope: Deactivated successfully.
Jan 22 04:35:49 np0005591760 ceph-mgr[74522]: [dashboard INFO dashboard.module] Engine started.
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: Active manager daemon compute-0.rfmoog restarted
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: Activating manager daemon compute-0.rfmoog
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: Manager daemon compute-0.rfmoog is now available
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"}]: dispatch
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/mirror_snapshot_schedule"}]: dispatch
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"}]: dispatch
Jan 22 04:35:49 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rfmoog/trash_purge_schedule"}]: dispatch
Jan 22 04:35:50 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.rfmoog(active, since 1.04567s), standbys: compute-2.bisona, compute-1.upcmhd
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v3: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:50 np0005591760 python3[98933]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:35:50 np0005591760 podman[98970]: 2026-01-22 09:35:50.332824813 +0000 UTC m=+0.066006515 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: [dashboard INFO request] [192.168.122.100:45446] [GET] [200] [0.105s] [6.3K] [0ee8f79e-548c-4c2e-bb72-925be802f808] /
Jan 22 04:35:50 np0005591760 podman[98970]: 2026-01-22 09:35:50.425085304 +0000 UTC m=+0.158266986 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:35:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:50.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:35:50] ENGINE Bus STARTING
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:35:50] ENGINE Bus STARTING
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:35:50] ENGINE Serving on http://192.168.122.100:8765
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:35:50] ENGINE Serving on http://192.168.122.100:8765
Jan 22 04:35:50 np0005591760 python3[99037]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: [dashboard INFO request] [192.168.122.100:45456] [GET] [200] [0.002s] [6.3K] [262e1b6b-8276-4902-8cfe-d84339494131] /
Jan 22 04:35:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:35:50] ENGINE Serving on https://192.168.122.100:7150
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:35:50] ENGINE Serving on https://192.168.122.100:7150
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:35:50] ENGINE Bus STARTED
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:35:50] ENGINE Bus STARTED
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: [cephadm INFO cherrypy.error] [22/Jan/2026:09:35:50] ENGINE Client ('192.168.122.100', 42564) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 04:35:50 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : [22/Jan/2026:09:35:50] ENGINE Client ('192.168.122.100', 42564) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 04:35:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:35:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:50 np0005591760 podman[99116]: 2026-01-22 09:35:50.816836091 +0000 UTC m=+0.045944452 container exec 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:50 np0005591760 podman[99116]: 2026-01-22 09:35:50.826963344 +0000 UTC m=+0.056071704 container exec_died 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:51 np0005591760 podman[99199]: 2026-01-22 09:35:51.135172238 +0000 UTC m=+0.045661688 container exec 065a60b194a9778f2f8646ef2f22164f095837f4040ec6eece4458eea7fc026d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:51 np0005591760 podman[99199]: 2026-01-22 09:35:51.159055906 +0000 UTC m=+0.069545357 container exec_died 065a60b194a9778f2f8646ef2f22164f095837f4040ec6eece4458eea7fc026d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:35:50] ENGINE Bus STARTING
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v4: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:51.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:51 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] Check health
Jan 22 04:35:51 np0005591760 podman[99267]: 2026-01-22 09:35:51.337400864 +0000 UTC m=+0.040580279 container exec 69534e59d7e144792f090c9414441b1986b3d2a1dd2f4af9ffb4fbe51024b14e (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:51 np0005591760 podman[99267]: 2026-01-22 09:35:51.462357068 +0000 UTC m=+0.165536451 container exec_died 69534e59d7e144792f090c9414441b1986b3d2a1dd2f4af9ffb4fbe51024b14e (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 22 04:35:51 np0005591760 podman[99325]: 2026-01-22 09:35:51.627241528 +0000 UTC m=+0.049048022 container exec fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:35:51 np0005591760 podman[99325]: 2026-01-22 09:35:51.639106027 +0000 UTC m=+0.060912491 container exec_died fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:35:51 np0005591760 podman[99377]: 2026-01-22 09:35:51.799869688 +0000 UTC m=+0.042215501 container exec 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc.)
Jan 22 04:35:51 np0005591760 podman[99377]: 2026-01-22 09:35:51.80798625 +0000 UTC m=+0.050332063 container exec_died 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, distribution-scope=public, io.buildah.version=1.28.2, release=1793, description=keepalived for Ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived)
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 22 04:35:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:35:51 np0005591760 podman[99429]: 2026-01-22 09:35:51.974500504 +0000 UTC m=+0.040099493 container exec a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:52 np0005591760 podman[99429]: 2026-01-22 09:35:51.999956908 +0000 UTC m=+0.065555886 container exec_died a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.rfmoog(active, since 2s), standbys: compute-2.bisona, compute-1.upcmhd
Jan 22 04:35:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:52.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:35:50] ENGINE Serving on http://192.168.122.100:8765
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:35:50] ENGINE Serving on https://192.168.122.100:7150
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:35:50] ENGINE Bus STARTED
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: [22/Jan/2026:09:35:50] ENGINE Client ('192.168.122.100', 42564) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 22 04:35:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:35:52 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:35:52 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:35:52 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:35:52 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:35:52 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:35:52 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v5: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:53.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: Updating compute-0:/etc/ceph/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: Updating compute-1:/etc/ceph/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.conf
Jan 22 04:35:53 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.rfmoog(active, since 4s), standbys: compute-2.bisona, compute-1.upcmhd
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:35:53 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:35:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:54.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev ef51fefe-22cb-43f7-ae2e-f3ead40b5526 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [progress INFO root] fail: finished ev ef51fefe-22cb-43f7-ae2e-f3ead40b5526 (Updating ingress.nfs.cephfs deployment (+6 -> 6)): max() arg is an empty sequence
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event ef51fefe-22cb-43f7-ae2e-f3ead40b5526 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 0 seconds
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [cephadm ERROR cephadm.serve] Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress#012service_id: nfs.cephfs#012service_name: ingress.nfs.cephfs#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012spec:#012  backend_service: nfs.cephfs#012  enable_haproxy_protocol: true#012  first_virtual_router_id: 50#012  frontend_port: 2049#012  monitor_port: 9049#012  virtual_ip: 192.168.122.2/24#012''')): max() arg is an empty sequence#012Traceback (most recent call last):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services#012    if self._apply_service(spec):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service#012    daemon_spec = svc.prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create#012    return self.haproxy_prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create#012    daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config#012    num_ranks = 1 + max(by_rank.keys())#012ValueError: max() arg is an empty sequence
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [ERR] : Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress#012service_id: nfs.cephfs#012service_name: ingress.nfs.cephfs#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012spec:#012  backend_service: nfs.cephfs#012  enable_haproxy_protocol: true#012  first_virtual_router_id: 50#012  frontend_port: 2049#012  monitor_port: 9049#012  virtual_ip: 192.168.122.2/24#012''')): max() arg is an empty sequence#012Traceback (most recent call last):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services#012    if self._apply_service(spec):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service#012    daemon_spec = svc.prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create#012    return self.haproxy_prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create#012    daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config#012    num_ranks = 1 + max(by_rank.keys())#012ValueError: max() arg is an empty sequence
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v6: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 53f6e5cf-6842-46e7-a464-44cb42e21a24 (Updating nfs.cephfs deployment (+3 -> 3))
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T09:35:54.671+0000 7ff390464640 -1 log_channel(cephadm) log [ERR] : Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: service_id: nfs.cephfs
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: service_name: ingress.nfs.cephfs
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: placement:
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  hosts:
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  - compute-0
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  - compute-1
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  - compute-2
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: spec:
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  backend_service: nfs.cephfs
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  enable_haproxy_protocol: true
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  first_virtual_router_id: 50
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  frontend_port: 2049
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  monitor_port: 9049
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  virtual_ip: 192.168.122.2/24
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ''')): max() arg is an empty sequence
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: Traceback (most recent call last):
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:    if self._apply_service(spec):
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:    daemon_spec = svc.prepare_create(daemon_spec)
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:    return self.haproxy_prepare_create(daemon_spec)
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:    daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]:    num_ranks = 1 + max(by_rank.keys())
Jan 22 04:35:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ValueError: max() arg is an empty sequence
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.pszzrs
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.pszzrs
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.pszzrs", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.pszzrs", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.pszzrs", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.pszzrs-rgw
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.pszzrs-rgw
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.pszzrs-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.pszzrs-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.pszzrs-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.pszzrs's ganesha conf is defaulting to empty
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: Updating compute-2:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: Updating compute-1:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: Updating compute-0:/var/lib/ceph/43df7a30-cf5f-5209-adfd-bf44298b19f2/config/ceph.client.admin.keyring
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.pszzrs", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.pszzrs", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.pszzrs", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.pszzrs-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:35:54 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.pszzrs-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.pszzrs's ganesha conf is defaulting to empty
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.pszzrs on compute-1
Jan 22 04:35:54 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.pszzrs on compute-1
Jan 22 04:35:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:55.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:55 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 1 service(s): ingress.nfs.cephfs (CEPHADM_APPLY_SPEC_FAIL)
Jan 22 04:35:55 np0005591760 ceph-mon[74254]: Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress#012service_id: nfs.cephfs#012service_name: ingress.nfs.cephfs#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012spec:#012  backend_service: nfs.cephfs#012  enable_haproxy_protocol: true#012  first_virtual_router_id: 50#012  frontend_port: 2049#012  monitor_port: 9049#012  virtual_ip: 192.168.122.2/24#012''')): max() arg is an empty sequence#012Traceback (most recent call last):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services#012    if self._apply_service(spec):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service#012    daemon_spec = svc.prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create#012    return self.haproxy_prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create#012    daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config#012    num_ranks = 1 + max(by_rank.keys())#012ValueError: max() arg is an empty sequence
Jan 22 04:35:55 np0005591760 ceph-mon[74254]: Creating key for client.nfs.cephfs.0.0.compute-1.pszzrs
Jan 22 04:35:55 np0005591760 ceph-mon[74254]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 22 04:35:55 np0005591760 ceph-mon[74254]: Rados config object exists: conf-nfs.cephfs
Jan 22 04:35:55 np0005591760 ceph-mon[74254]: Creating key for client.nfs.cephfs.0.0.compute-1.pszzrs-rgw
Jan 22 04:35:55 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.pszzrs-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 04:35:55 np0005591760 ceph-mon[74254]: Bind address in nfs.cephfs.0.0.compute-1.pszzrs's ganesha conf is defaulting to empty
Jan 22 04:35:55 np0005591760 ceph-mon[74254]: Deploying daemon nfs.cephfs.0.0.compute-1.pszzrs on compute-1
Jan 22 04:35:55 np0005591760 ceph-mon[74254]: Health check failed: Failed to apply 1 service(s): ingress.nfs.cephfs (CEPHADM_APPLY_SPEC_FAIL)
Jan 22 04:35:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:35:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:35:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:35:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:56 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.qniaxp
Jan 22 04:35:56 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.qniaxp
Jan 22 04:35:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.qniaxp", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 22 04:35:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.qniaxp", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 22 04:35:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.qniaxp", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 22 04:35:56 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 22 04:35:56 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 22 04:35:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 22 04:35:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 22 04:35:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 22 04:35:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:56.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:56 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v7: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 316 B/s wr, 11 op/s
Jan 22 04:35:57 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:57 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:57 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:57 np0005591760 ceph-mon[74254]: Creating key for client.nfs.cephfs.1.0.compute-2.qniaxp
Jan 22 04:35:57 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.qniaxp", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 22 04:35:57 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.qniaxp", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 22 04:35:57 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.qniaxp", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 22 04:35:57 np0005591760 ceph-mon[74254]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 22 04:35:57 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 22 04:35:57 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 22 04:35:57 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 22 04:35:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:57.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:35:57] "GET /metrics HTTP/1.1" 200 46557 "" "Prometheus/2.51.0"
Jan 22 04:35:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:35:57] "GET /metrics HTTP/1.1" 200 46557 "" "Prometheus/2.51.0"
Jan 22 04:35:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:35:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:35:58.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:58 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v8: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 241 B/s wr, 8 op/s
Jan 22 04:35:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 13 completed events
Jan 22 04:35:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:35:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:35:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 22 04:35:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 22 04:35:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 22 04:35:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:35:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:35:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:35:59.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:35:59 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 22 04:35:59 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 22 04:35:59 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.qniaxp-rgw
Jan 22 04:35:59 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.qniaxp-rgw
Jan 22 04:35:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.qniaxp-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 22 04:35:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.qniaxp-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:35:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.qniaxp-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 04:35:59 np0005591760 ceph-mgr[74522]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.qniaxp's ganesha conf is defaulting to empty
Jan 22 04:35:59 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.qniaxp's ganesha conf is defaulting to empty
Jan 22 04:35:59 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.qniaxp on compute-2
Jan 22 04:35:59 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.qniaxp on compute-2
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: Rados config object exists: conf-nfs.cephfs
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: Creating key for client.nfs.cephfs.1.0.compute-2.qniaxp-rgw
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.qniaxp-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.qniaxp-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.qniaxp-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: Bind address in nfs.cephfs.1.0.compute-2.qniaxp's ganesha conf is defaulting to empty
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: Deploying daemon nfs.cephfs.1.0.compute-2.qniaxp on compute-2
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:36:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:36:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:00.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:00 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.ylzmiu
Jan 22 04:36:00 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.ylzmiu
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ylzmiu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ylzmiu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ylzmiu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 22 04:36:00 np0005591760 ceph-mgr[74522]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 22 04:36:00 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 22 04:36:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 22 04:36:00 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v9: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 879 B/s wr, 9 op/s
Jan 22 04:36:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:01.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:01 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:01 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:01 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:01 np0005591760 ceph-mon[74254]: Creating key for client.nfs.cephfs.2.0.compute-0.ylzmiu
Jan 22 04:36:01 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ylzmiu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 22 04:36:01 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ylzmiu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 22 04:36:01 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ylzmiu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 22 04:36:01 np0005591760 ceph-mon[74254]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 22 04:36:01 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 22 04:36:01 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 22 04:36:01 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 22 04:36:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:36:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:02.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:36:02 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v10: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 801 B/s wr, 8 op/s
Jan 22 04:36:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:36:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:03.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 22 04:36:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 22 04:36:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 22 04:36:03 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 22 04:36:03 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 22 04:36:03 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.ylzmiu-rgw
Jan 22 04:36:03 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.ylzmiu-rgw
Jan 22 04:36:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ylzmiu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 22 04:36:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ylzmiu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:36:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ylzmiu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 04:36:03 np0005591760 ceph-mgr[74522]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.ylzmiu's ganesha conf is defaulting to empty
Jan 22 04:36:03 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.ylzmiu's ganesha conf is defaulting to empty
Jan 22 04:36:03 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.ylzmiu on compute-0
Jan 22 04:36:03 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.ylzmiu on compute-0
Jan 22 04:36:04 np0005591760 podman[100693]: 2026-01-22 09:36:04.173803716 +0000 UTC m=+0.040294298 container create 9ff2dea8a353ba0284ed719780c6435e0b0d76b250eb19e37c97feadbfb33c42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:04 np0005591760 systemd[1]: Started libpod-conmon-9ff2dea8a353ba0284ed719780c6435e0b0d76b250eb19e37c97feadbfb33c42.scope.
Jan 22 04:36:04 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:04 np0005591760 podman[100693]: 2026-01-22 09:36:04.234368211 +0000 UTC m=+0.100858793 container init 9ff2dea8a353ba0284ed719780c6435e0b0d76b250eb19e37c97feadbfb33c42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_lamarr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:04 np0005591760 podman[100693]: 2026-01-22 09:36:04.241678072 +0000 UTC m=+0.108168645 container start 9ff2dea8a353ba0284ed719780c6435e0b0d76b250eb19e37c97feadbfb33c42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_lamarr, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:36:04 np0005591760 podman[100693]: 2026-01-22 09:36:04.242885318 +0000 UTC m=+0.109375890 container attach 9ff2dea8a353ba0284ed719780c6435e0b0d76b250eb19e37c97feadbfb33c42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_lamarr, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:36:04 np0005591760 hungry_lamarr[100706]: 167 167
Jan 22 04:36:04 np0005591760 systemd[1]: libpod-9ff2dea8a353ba0284ed719780c6435e0b0d76b250eb19e37c97feadbfb33c42.scope: Deactivated successfully.
Jan 22 04:36:04 np0005591760 podman[100693]: 2026-01-22 09:36:04.24783511 +0000 UTC m=+0.114325712 container died 9ff2dea8a353ba0284ed719780c6435e0b0d76b250eb19e37c97feadbfb33c42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_lamarr, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 22 04:36:04 np0005591760 podman[100693]: 2026-01-22 09:36:04.160159904 +0000 UTC m=+0.026650496 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 22 04:36:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:04 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0236b705336dc0c707c4fcc0318b400b173c35e700a55bc38f612229fc9bd442-merged.mount: Deactivated successfully.
Jan 22 04:36:04 np0005591760 podman[100693]: 2026-01-22 09:36:04.272856142 +0000 UTC m=+0.139346715 container remove 9ff2dea8a353ba0284ed719780c6435e0b0d76b250eb19e37c97feadbfb33c42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:04 np0005591760 systemd[1]: libpod-conmon-9ff2dea8a353ba0284ed719780c6435e0b0d76b250eb19e37c97feadbfb33c42.scope: Deactivated successfully.
Jan 22 04:36:04 np0005591760 systemd[1]: Reloading.
Jan 22 04:36:04 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:36:04 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:36:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:36:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:04.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:36:04 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 22 04:36:04 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 22 04:36:04 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 22 04:36:04 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ylzmiu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:36:04 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ylzmiu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:36:04 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ylzmiu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 04:36:04 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:04 np0005591760 systemd[1]: Reloading.
Jan 22 04:36:04 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:36:04 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:36:04 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v11: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 801 B/s wr, 8 op/s
Jan 22 04:36:04 np0005591760 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:36:05 np0005591760 podman[100837]: 2026-01-22 09:36:05.004597262 +0000 UTC m=+0.037943317 container create 031496c0ee52749d6ca2497660dbe05b1127b23323048c8cd033c2e060138ea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a3dfd55375c26d678c15bed6e34600103eea7ae9c9253e5dc4640215c40f62/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a3dfd55375c26d678c15bed6e34600103eea7ae9c9253e5dc4640215c40f62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a3dfd55375c26d678c15bed6e34600103eea7ae9c9253e5dc4640215c40f62/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a3dfd55375c26d678c15bed6e34600103eea7ae9c9253e5dc4640215c40f62/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ylzmiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:05 np0005591760 podman[100837]: 2026-01-22 09:36:05.057503838 +0000 UTC m=+0.090849913 container init 031496c0ee52749d6ca2497660dbe05b1127b23323048c8cd033c2e060138ea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:36:05 np0005591760 podman[100837]: 2026-01-22 09:36:05.061444397 +0000 UTC m=+0.094790452 container start 031496c0ee52749d6ca2497660dbe05b1127b23323048c8cd033c2e060138ea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 22 04:36:05 np0005591760 bash[100837]: 031496c0ee52749d6ca2497660dbe05b1127b23323048c8cd033c2e060138ea0
Jan 22 04:36:05 np0005591760 podman[100837]: 2026-01-22 09:36:04.991697271 +0000 UTC m=+0.025043346 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:05 np0005591760 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:36:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:05 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 22 04:36:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:05 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 22 04:36:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:05 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 22 04:36:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:05 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 22 04:36:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:05 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 22 04:36:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:05 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:05 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:05 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 53f6e5cf-6842-46e7-a464-44cb42e21a24 (Updating nfs.cephfs deployment (+3 -> 3))
Jan 22 04:36:05 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 53f6e5cf-6842-46e7-a464-44cb42e21a24 (Updating nfs.cephfs deployment (+3 -> 3)) in 10 seconds
Jan 22 04:36:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:05 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: Rados config object exists: conf-nfs.cephfs
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: Creating key for client.nfs.cephfs.2.0.compute-0.ylzmiu-rgw
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: Bind address in nfs.cephfs.2.0.compute-0.ylzmiu's ganesha conf is defaulting to empty
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: Deploying daemon nfs.cephfs.2.0.compute-0.ylzmiu on compute-0
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:05 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:36:05 np0005591760 podman[100971]: 2026-01-22 09:36:05.593666588 +0000 UTC m=+0.034754674 container create 64f48de833463bd827c1704188e8062e5b6622c1f0dd1bb5ce1e13c23b76bc72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 04:36:05 np0005591760 systemd[1]: Started libpod-conmon-64f48de833463bd827c1704188e8062e5b6622c1f0dd1bb5ce1e13c23b76bc72.scope.
Jan 22 04:36:05 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:05 np0005591760 podman[100971]: 2026-01-22 09:36:05.659489016 +0000 UTC m=+0.100577102 container init 64f48de833463bd827c1704188e8062e5b6622c1f0dd1bb5ce1e13c23b76bc72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 22 04:36:05 np0005591760 podman[100971]: 2026-01-22 09:36:05.664415333 +0000 UTC m=+0.105503419 container start 64f48de833463bd827c1704188e8062e5b6622c1f0dd1bb5ce1e13c23b76bc72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:05 np0005591760 podman[100971]: 2026-01-22 09:36:05.665436418 +0000 UTC m=+0.106524504 container attach 64f48de833463bd827c1704188e8062e5b6622c1f0dd1bb5ce1e13c23b76bc72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Jan 22 04:36:05 np0005591760 eager_brahmagupta[100984]: 167 167
Jan 22 04:36:05 np0005591760 systemd[1]: libpod-64f48de833463bd827c1704188e8062e5b6622c1f0dd1bb5ce1e13c23b76bc72.scope: Deactivated successfully.
Jan 22 04:36:05 np0005591760 conmon[100984]: conmon 64f48de833463bd827c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64f48de833463bd827c1704188e8062e5b6622c1f0dd1bb5ce1e13c23b76bc72.scope/container/memory.events
Jan 22 04:36:05 np0005591760 podman[100971]: 2026-01-22 09:36:05.669943615 +0000 UTC m=+0.111031721 container died 64f48de833463bd827c1704188e8062e5b6622c1f0dd1bb5ce1e13c23b76bc72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 22 04:36:05 np0005591760 podman[100971]: 2026-01-22 09:36:05.579603595 +0000 UTC m=+0.020691702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:05 np0005591760 systemd[1]: var-lib-containers-storage-overlay-295b850be326e2cb1c00da5ad98daecd8f07576a19199a2b73ab2267678a3abb-merged.mount: Deactivated successfully.
Jan 22 04:36:05 np0005591760 podman[100971]: 2026-01-22 09:36:05.691361866 +0000 UTC m=+0.132449952 container remove 64f48de833463bd827c1704188e8062e5b6622c1f0dd1bb5ce1e13c23b76bc72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_brahmagupta, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:05 np0005591760 systemd[1]: libpod-conmon-64f48de833463bd827c1704188e8062e5b6622c1f0dd1bb5ce1e13c23b76bc72.scope: Deactivated successfully.
Jan 22 04:36:05 np0005591760 podman[101006]: 2026-01-22 09:36:05.829586684 +0000 UTC m=+0.038146840 container create 7b6ddc26e9f2ad548c01edca31f0cd5782e6155a8d844b62763e3d634d4ba3fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_leavitt, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 04:36:05 np0005591760 systemd[1]: Started libpod-conmon-7b6ddc26e9f2ad548c01edca31f0cd5782e6155a8d844b62763e3d634d4ba3fe.scope.
Jan 22 04:36:05 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47a4985df5e2f7f3a739f68ff00f5ab58d888c39528399ac19083ecfa4729b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47a4985df5e2f7f3a739f68ff00f5ab58d888c39528399ac19083ecfa4729b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47a4985df5e2f7f3a739f68ff00f5ab58d888c39528399ac19083ecfa4729b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47a4985df5e2f7f3a739f68ff00f5ab58d888c39528399ac19083ecfa4729b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47a4985df5e2f7f3a739f68ff00f5ab58d888c39528399ac19083ecfa4729b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:05 np0005591760 podman[101006]: 2026-01-22 09:36:05.814671105 +0000 UTC m=+0.023231280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:05 np0005591760 podman[101006]: 2026-01-22 09:36:05.912349149 +0000 UTC m=+0.120909314 container init 7b6ddc26e9f2ad548c01edca31f0cd5782e6155a8d844b62763e3d634d4ba3fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 04:36:05 np0005591760 podman[101006]: 2026-01-22 09:36:05.91935137 +0000 UTC m=+0.127911525 container start 7b6ddc26e9f2ad548c01edca31f0cd5782e6155a8d844b62763e3d634d4ba3fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_leavitt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:36:05 np0005591760 podman[101006]: 2026-01-22 09:36:05.920558377 +0000 UTC m=+0.129118531 container attach 7b6ddc26e9f2ad548c01edca31f0cd5782e6155a8d844b62763e3d634d4ba3fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 04:36:06 np0005591760 pedantic_leavitt[101019]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:36:06 np0005591760 pedantic_leavitt[101019]: --> All data devices are unavailable
Jan 22 04:36:06 np0005591760 systemd[1]: libpod-7b6ddc26e9f2ad548c01edca31f0cd5782e6155a8d844b62763e3d634d4ba3fe.scope: Deactivated successfully.
Jan 22 04:36:06 np0005591760 podman[101006]: 2026-01-22 09:36:06.22380869 +0000 UTC m=+0.432368846 container died 7b6ddc26e9f2ad548c01edca31f0cd5782e6155a8d844b62763e3d634d4ba3fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_leavitt, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:36:06 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c47a4985df5e2f7f3a739f68ff00f5ab58d888c39528399ac19083ecfa4729b0-merged.mount: Deactivated successfully.
Jan 22 04:36:06 np0005591760 podman[101006]: 2026-01-22 09:36:06.250460729 +0000 UTC m=+0.459020885 container remove 7b6ddc26e9f2ad548c01edca31f0cd5782e6155a8d844b62763e3d634d4ba3fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 22 04:36:06 np0005591760 systemd[1]: libpod-conmon-7b6ddc26e9f2ad548c01edca31f0cd5782e6155a8d844b62763e3d634d4ba3fe.scope: Deactivated successfully.
Jan 22 04:36:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:36:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:06.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 22 04:36:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 22 04:36:06 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v12: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 KiB/s wr, 11 op/s
Jan 22 04:36:06 np0005591760 podman[101139]: 2026-01-22 09:36:06.764575022 +0000 UTC m=+0.035224972 container create 620eff0b94b9b998b03017209b1f387fcd3e42532fec2122106ffd7b4e912ac3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:36:06 np0005591760 systemd[1]: Started libpod-conmon-620eff0b94b9b998b03017209b1f387fcd3e42532fec2122106ffd7b4e912ac3.scope.
Jan 22 04:36:06 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:06 np0005591760 podman[101139]: 2026-01-22 09:36:06.831896474 +0000 UTC m=+0.102546435 container init 620eff0b94b9b998b03017209b1f387fcd3e42532fec2122106ffd7b4e912ac3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_dewdney, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:36:06 np0005591760 podman[101139]: 2026-01-22 09:36:06.838332458 +0000 UTC m=+0.108982409 container start 620eff0b94b9b998b03017209b1f387fcd3e42532fec2122106ffd7b4e912ac3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:36:06 np0005591760 podman[101139]: 2026-01-22 09:36:06.840278567 +0000 UTC m=+0.110928518 container attach 620eff0b94b9b998b03017209b1f387fcd3e42532fec2122106ffd7b4e912ac3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_dewdney, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:06 np0005591760 unruffled_dewdney[101152]: 167 167
Jan 22 04:36:06 np0005591760 systemd[1]: libpod-620eff0b94b9b998b03017209b1f387fcd3e42532fec2122106ffd7b4e912ac3.scope: Deactivated successfully.
Jan 22 04:36:06 np0005591760 conmon[101152]: conmon 620eff0b94b9b998b030 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-620eff0b94b9b998b03017209b1f387fcd3e42532fec2122106ffd7b4e912ac3.scope/container/memory.events
Jan 22 04:36:06 np0005591760 podman[101139]: 2026-01-22 09:36:06.845243848 +0000 UTC m=+0.115893979 container died 620eff0b94b9b998b03017209b1f387fcd3e42532fec2122106ffd7b4e912ac3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:06 np0005591760 podman[101139]: 2026-01-22 09:36:06.751334928 +0000 UTC m=+0.021984900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:06 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6ea2b9a32237290476561cd7bfc31541fa3d4de7ea3cdd02b577b32817ced15c-merged.mount: Deactivated successfully.
Jan 22 04:36:06 np0005591760 podman[101139]: 2026-01-22 09:36:06.871412094 +0000 UTC m=+0.142062045 container remove 620eff0b94b9b998b03017209b1f387fcd3e42532fec2122106ffd7b4e912ac3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 04:36:06 np0005591760 systemd[1]: libpod-conmon-620eff0b94b9b998b03017209b1f387fcd3e42532fec2122106ffd7b4e912ac3.scope: Deactivated successfully.
Jan 22 04:36:07 np0005591760 podman[101174]: 2026-01-22 09:36:07.016121929 +0000 UTC m=+0.033825563 container create d76936160899a22c35a5469b84cbe7338cd50cab00835e6730f098d6c43137cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 22 04:36:07 np0005591760 systemd[1]: Started libpod-conmon-d76936160899a22c35a5469b84cbe7338cd50cab00835e6730f098d6c43137cf.scope.
Jan 22 04:36:07 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3737363210b644f7bf6a62072f80c5eefaffba5b15d67ca24fabb67d9f37a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3737363210b644f7bf6a62072f80c5eefaffba5b15d67ca24fabb67d9f37a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3737363210b644f7bf6a62072f80c5eefaffba5b15d67ca24fabb67d9f37a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3737363210b644f7bf6a62072f80c5eefaffba5b15d67ca24fabb67d9f37a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:07 np0005591760 podman[101174]: 2026-01-22 09:36:07.08708077 +0000 UTC m=+0.104784403 container init d76936160899a22c35a5469b84cbe7338cd50cab00835e6730f098d6c43137cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 22 04:36:07 np0005591760 podman[101174]: 2026-01-22 09:36:07.092941418 +0000 UTC m=+0.110645052 container start d76936160899a22c35a5469b84cbe7338cd50cab00835e6730f098d6c43137cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_agnesi, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 04:36:07 np0005591760 podman[101174]: 2026-01-22 09:36:07.097708586 +0000 UTC m=+0.115412220 container attach d76936160899a22c35a5469b84cbe7338cd50cab00835e6730f098d6c43137cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 04:36:07 np0005591760 podman[101174]: 2026-01-22 09:36:07.002206714 +0000 UTC m=+0.019910368 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:36:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:07.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]: {
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:    "0": [
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:        {
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            "devices": [
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "/dev/loop3"
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            ],
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            "lv_name": "ceph_lv0",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            "lv_size": "21470642176",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            "name": "ceph_lv0",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            "tags": {
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.cluster_name": "ceph",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.crush_device_class": "",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.encrypted": "0",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.osd_id": "0",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.type": "block",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.vdo": "0",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:                "ceph.with_tpm": "0"
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            },
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            "type": "block",
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:            "vg_name": "ceph_vg0"
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:        }
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]:    ]
Jan 22 04:36:07 np0005591760 priceless_agnesi[101187]: }
Jan 22 04:36:07 np0005591760 systemd[1]: libpod-d76936160899a22c35a5469b84cbe7338cd50cab00835e6730f098d6c43137cf.scope: Deactivated successfully.
Jan 22 04:36:07 np0005591760 podman[101174]: 2026-01-22 09:36:07.359744138 +0000 UTC m=+0.377447772 container died d76936160899a22c35a5469b84cbe7338cd50cab00835e6730f098d6c43137cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 04:36:07 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0d3737363210b644f7bf6a62072f80c5eefaffba5b15d67ca24fabb67d9f37a8-merged.mount: Deactivated successfully.
Jan 22 04:36:07 np0005591760 podman[101174]: 2026-01-22 09:36:07.382870729 +0000 UTC m=+0.400574363 container remove d76936160899a22c35a5469b84cbe7338cd50cab00835e6730f098d6c43137cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_agnesi, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 04:36:07 np0005591760 systemd[1]: libpod-conmon-d76936160899a22c35a5469b84cbe7338cd50cab00835e6730f098d6c43137cf.scope: Deactivated successfully.
Jan 22 04:36:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:36:07] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 22 04:36:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:36:07] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 22 04:36:07 np0005591760 podman[101287]: 2026-01-22 09:36:07.89619417 +0000 UTC m=+0.038663625 container create 41d694019f2bd6a285de736f95ee48b31dcb1ff6d89867303acf981040b75506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lehmann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:07 np0005591760 systemd[1]: Started libpod-conmon-41d694019f2bd6a285de736f95ee48b31dcb1ff6d89867303acf981040b75506.scope.
Jan 22 04:36:07 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:07 np0005591760 podman[101287]: 2026-01-22 09:36:07.955380024 +0000 UTC m=+0.097849479 container init 41d694019f2bd6a285de736f95ee48b31dcb1ff6d89867303acf981040b75506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lehmann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:07 np0005591760 podman[101287]: 2026-01-22 09:36:07.96011409 +0000 UTC m=+0.102583545 container start 41d694019f2bd6a285de736f95ee48b31dcb1ff6d89867303acf981040b75506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lehmann, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:36:07 np0005591760 podman[101287]: 2026-01-22 09:36:07.961437565 +0000 UTC m=+0.103907020 container attach 41d694019f2bd6a285de736f95ee48b31dcb1ff6d89867303acf981040b75506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lehmann, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 22 04:36:07 np0005591760 recursing_lehmann[101300]: 167 167
Jan 22 04:36:07 np0005591760 systemd[1]: libpod-41d694019f2bd6a285de736f95ee48b31dcb1ff6d89867303acf981040b75506.scope: Deactivated successfully.
Jan 22 04:36:07 np0005591760 podman[101287]: 2026-01-22 09:36:07.96555661 +0000 UTC m=+0.108026065 container died 41d694019f2bd6a285de736f95ee48b31dcb1ff6d89867303acf981040b75506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:07 np0005591760 podman[101287]: 2026-01-22 09:36:07.880395986 +0000 UTC m=+0.022865461 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:07 np0005591760 systemd[1]: var-lib-containers-storage-overlay-9c09466492a1dfde64a3ad1b0e0387d02f435aac4d9452e3da1ea02d90a682bf-merged.mount: Deactivated successfully.
Jan 22 04:36:07 np0005591760 podman[101287]: 2026-01-22 09:36:07.993613287 +0000 UTC m=+0.136082742 container remove 41d694019f2bd6a285de736f95ee48b31dcb1ff6d89867303acf981040b75506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lehmann, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:36:08 np0005591760 systemd[1]: libpod-conmon-41d694019f2bd6a285de736f95ee48b31dcb1ff6d89867303acf981040b75506.scope: Deactivated successfully.
Jan 22 04:36:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:36:08 np0005591760 podman[101321]: 2026-01-22 09:36:08.128496576 +0000 UTC m=+0.041214755 container create a8c7a1dc8ae5efd6f73111b0c4c237ce850e5753c3b21637faa6ab5d16b078a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_heisenberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 04:36:08 np0005591760 systemd[1]: Started libpod-conmon-a8c7a1dc8ae5efd6f73111b0c4c237ce850e5753c3b21637faa6ab5d16b078a8.scope.
Jan 22 04:36:08 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8afbf1d60acee4b1002d461214654e5414c6e8a01940d4997a9d554c4cabec8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8afbf1d60acee4b1002d461214654e5414c6e8a01940d4997a9d554c4cabec8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8afbf1d60acee4b1002d461214654e5414c6e8a01940d4997a9d554c4cabec8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8afbf1d60acee4b1002d461214654e5414c6e8a01940d4997a9d554c4cabec8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:08 np0005591760 podman[101321]: 2026-01-22 09:36:08.113048582 +0000 UTC m=+0.025766770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:08 np0005591760 podman[101321]: 2026-01-22 09:36:08.209623305 +0000 UTC m=+0.122341475 container init a8c7a1dc8ae5efd6f73111b0c4c237ce850e5753c3b21637faa6ab5d16b078a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_heisenberg, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 04:36:08 np0005591760 podman[101321]: 2026-01-22 09:36:08.214937485 +0000 UTC m=+0.127655654 container start a8c7a1dc8ae5efd6f73111b0c4c237ce850e5753c3b21637faa6ab5d16b078a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_heisenberg, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 04:36:08 np0005591760 podman[101321]: 2026-01-22 09:36:08.218707963 +0000 UTC m=+0.131426133 container attach a8c7a1dc8ae5efd6f73111b0c4c237ce850e5753c3b21637faa6ab5d16b078a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_heisenberg, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 04:36:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:08.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:08 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v13: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Jan 22 04:36:08 np0005591760 lvm[101410]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:36:08 np0005591760 lvm[101410]: VG ceph_vg0 finished
Jan 22 04:36:08 np0005591760 gifted_heisenberg[101334]: {}
Jan 22 04:36:08 np0005591760 systemd[1]: libpod-a8c7a1dc8ae5efd6f73111b0c4c237ce850e5753c3b21637faa6ab5d16b078a8.scope: Deactivated successfully.
Jan 22 04:36:08 np0005591760 systemd[1]: libpod-a8c7a1dc8ae5efd6f73111b0c4c237ce850e5753c3b21637faa6ab5d16b078a8.scope: Consumed 1.020s CPU time.
Jan 22 04:36:08 np0005591760 podman[101321]: 2026-01-22 09:36:08.833529071 +0000 UTC m=+0.746247239 container died a8c7a1dc8ae5efd6f73111b0c4c237ce850e5753c3b21637faa6ab5d16b078a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 04:36:08 np0005591760 systemd[1]: var-lib-containers-storage-overlay-f8afbf1d60acee4b1002d461214654e5414c6e8a01940d4997a9d554c4cabec8-merged.mount: Deactivated successfully.
Jan 22 04:36:08 np0005591760 podman[101321]: 2026-01-22 09:36:08.867898849 +0000 UTC m=+0.780617018 container remove a8c7a1dc8ae5efd6f73111b0c4c237ce850e5753c3b21637faa6ab5d16b078a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 04:36:08 np0005591760 systemd[1]: libpod-conmon-a8c7a1dc8ae5efd6f73111b0c4c237ce850e5753c3b21637faa6ab5d16b078a8.scope: Deactivated successfully.
Jan 22 04:36:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 22 04:36:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:09 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 14 completed events
Jan 22 04:36:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:36:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:09.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:09 np0005591760 podman[101581]: 2026-01-22 09:36:09.703420478 +0000 UTC m=+0.050928567 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:09 np0005591760 podman[101581]: 2026-01-22 09:36:09.793421389 +0000 UTC m=+0.140929458 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 22 04:36:09 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:09 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:09 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:09 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:10 np0005591760 podman[101677]: 2026-01-22 09:36:10.235117039 +0000 UTC m=+0.060723315 container exec 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:10 np0005591760 podman[101677]: 2026-01-22 09:36:10.245957597 +0000 UTC m=+0.071563862 container exec_died 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:36:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:36:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:36:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:36:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:36:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:10.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:36:10 np0005591760 podman[101761]: 2026-01-22 09:36:10.592095959 +0000 UTC m=+0.046322623 container exec 065a60b194a9778f2f8646ef2f22164f095837f4040ec6eece4458eea7fc026d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:10 np0005591760 podman[101761]: 2026-01-22 09:36:10.62219181 +0000 UTC m=+0.076418454 container exec_died 065a60b194a9778f2f8646ef2f22164f095837f4040ec6eece4458eea7fc026d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:10 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v14: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 2.4 KiB/s wr, 8 op/s
Jan 22 04:36:10 np0005591760 podman[101819]: 2026-01-22 09:36:10.78296503 +0000 UTC m=+0.040619641 container exec 69534e59d7e144792f090c9414441b1986b3d2a1dd2f4af9ffb4fbe51024b14e (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:10 np0005591760 podman[101819]: 2026-01-22 09:36:10.921149482 +0000 UTC m=+0.178804095 container exec_died 69534e59d7e144792f090c9414441b1986b3d2a1dd2f4af9ffb4fbe51024b14e (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:11 np0005591760 podman[101878]: 2026-01-22 09:36:11.123949285 +0000 UTC m=+0.045731781 container exec fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:36:11 np0005591760 podman[101878]: 2026-01-22 09:36:11.161989985 +0000 UTC m=+0.083772480 container exec_died fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:36:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:11.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:11 np0005591760 podman[101932]: 2026-01-22 09:36:11.317514148 +0000 UTC m=+0.039172484 container exec 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=keepalived-container, name=keepalived, version=2.2.4, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20)
Jan 22 04:36:11 np0005591760 podman[101932]: 2026-01-22 09:36:11.324946751 +0000 UTC m=+0.046605067 container exec_died 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, com.redhat.component=keepalived-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, release=1793, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=keepalived, version=2.2.4, io.buildah.version=1.28.2, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20)
Jan 22 04:36:11 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:11 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:11 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:11 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:11 np0005591760 podman[101983]: 2026-01-22 09:36:11.499240531 +0000 UTC m=+0.040140880 container exec a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:11 np0005591760 podman[101983]: 2026-01-22 09:36:11.529938207 +0000 UTC m=+0.070838555 container exec_died a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:11 np0005591760 podman[102029]: 2026-01-22 09:36:11.674590302 +0000 UTC m=+0.042420507 container exec 031496c0ee52749d6ca2497660dbe05b1127b23323048c8cd033c2e060138ea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:36:11 np0005591760 podman[102029]: 2026-01-22 09:36:11.688129828 +0000 UTC m=+0.055960034 container exec_died 031496c0ee52749d6ca2497660dbe05b1127b23323048c8cd033c2e060138ea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 22 04:36:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:36:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v15: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 2.0 KiB/s wr, 7 op/s
Jan 22 04:36:11 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev ccd5c9ee-2138-4754-96ca-b13b0a26b891 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Jan 22 04:36:11 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.zxzfsl on compute-1
Jan 22 04:36:11 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.zxzfsl on compute-1
Jan 22 04:36:12 np0005591760 systemd-logind[747]: New session 37 of user zuul.
Jan 22 04:36:12 np0005591760 systemd[1]: Started Session 37 of User zuul.
Jan 22 04:36:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:12 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:36:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:12.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:12 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): ingress.nfs.cephfs)
Jan 22 04:36:12 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 22 04:36:12 np0005591760 python3.9[102209]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:36:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:36:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:13.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:13 np0005591760 ceph-mon[74254]: Deploying daemon haproxy.nfs.cephfs.compute-1.zxzfsl on compute-1
Jan 22 04:36:13 np0005591760 ceph-mon[74254]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): ingress.nfs.cephfs)
Jan 22 04:36:13 np0005591760 ceph-mon[74254]: Cluster is now healthy
Jan 22 04:36:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v16: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 2.0 KiB/s wr, 7 op/s
Jan 22 04:36:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:14.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:14 np0005591760 python3.9[102423]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:36:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:15.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:36:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:36:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 22 04:36:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:15 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.dnpemq on compute-0
Jan 22 04:36:15 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.dnpemq on compute-0
Jan 22 04:36:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v17: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 2.0 KiB/s wr, 7 op/s
Jan 22 04:36:16 np0005591760 podman[102519]: 2026-01-22 09:36:16.025647838 +0000 UTC m=+0.036620894 container create 7457ad322d19c384edf0af6a4a448ffda7bacd67e3434d37268ccc1067143913 (image=quay.io/ceph/haproxy:2.3, name=gracious_noyce)
Jan 22 04:36:16 np0005591760 systemd[1]: Started libpod-conmon-7457ad322d19c384edf0af6a4a448ffda7bacd67e3434d37268ccc1067143913.scope.
Jan 22 04:36:16 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:16 np0005591760 podman[102519]: 2026-01-22 09:36:16.009156286 +0000 UTC m=+0.020129362 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 22 04:36:16 np0005591760 podman[102519]: 2026-01-22 09:36:16.105330285 +0000 UTC m=+0.116303362 container init 7457ad322d19c384edf0af6a4a448ffda7bacd67e3434d37268ccc1067143913 (image=quay.io/ceph/haproxy:2.3, name=gracious_noyce)
Jan 22 04:36:16 np0005591760 podman[102519]: 2026-01-22 09:36:16.111507892 +0000 UTC m=+0.122480939 container start 7457ad322d19c384edf0af6a4a448ffda7bacd67e3434d37268ccc1067143913 (image=quay.io/ceph/haproxy:2.3, name=gracious_noyce)
Jan 22 04:36:16 np0005591760 podman[102519]: 2026-01-22 09:36:16.113838867 +0000 UTC m=+0.124811943 container attach 7457ad322d19c384edf0af6a4a448ffda7bacd67e3434d37268ccc1067143913 (image=quay.io/ceph/haproxy:2.3, name=gracious_noyce)
Jan 22 04:36:16 np0005591760 gracious_noyce[102535]: 0 0
Jan 22 04:36:16 np0005591760 systemd[1]: libpod-7457ad322d19c384edf0af6a4a448ffda7bacd67e3434d37268ccc1067143913.scope: Deactivated successfully.
Jan 22 04:36:16 np0005591760 podman[102519]: 2026-01-22 09:36:16.116197835 +0000 UTC m=+0.127170891 container died 7457ad322d19c384edf0af6a4a448ffda7bacd67e3434d37268ccc1067143913 (image=quay.io/ceph/haproxy:2.3, name=gracious_noyce)
Jan 22 04:36:16 np0005591760 systemd[1]: var-lib-containers-storage-overlay-75fc208df3790b3e976222387eec88f72a156825e899bc4065394405719b1691-merged.mount: Deactivated successfully.
Jan 22 04:36:16 np0005591760 podman[102519]: 2026-01-22 09:36:16.148624009 +0000 UTC m=+0.159597065 container remove 7457ad322d19c384edf0af6a4a448ffda7bacd67e3434d37268ccc1067143913 (image=quay.io/ceph/haproxy:2.3, name=gracious_noyce)
Jan 22 04:36:16 np0005591760 systemd[1]: libpod-conmon-7457ad322d19c384edf0af6a4a448ffda7bacd67e3434d37268ccc1067143913.scope: Deactivated successfully.
Jan 22 04:36:16 np0005591760 systemd[1]: Reloading.
Jan 22 04:36:16 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:36:16 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:36:16 np0005591760 systemd[1]: Reloading.
Jan 22 04:36:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:16.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:16 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:36:16 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:36:16 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:16 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:16 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:16 np0005591760 ceph-mon[74254]: Deploying daemon haproxy.nfs.cephfs.compute-0.dnpemq on compute-0
Jan 22 04:36:16 np0005591760 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.dnpemq for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:36:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:16 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf0000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:16 np0005591760 podman[102667]: 2026-01-22 09:36:16.944521222 +0000 UTC m=+0.050663738 container create b32a6f1fd65665d58e5c7a9199b1f0a68d299193606f13ba045ddb63f2e02c6e (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq)
Jan 22 04:36:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32611ce47ca77998f4ab5046453ec1a232f141bb94b2a767c3507f56fccc8ab6/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:16 np0005591760 podman[102667]: 2026-01-22 09:36:16.995210178 +0000 UTC m=+0.101352704 container init b32a6f1fd65665d58e5c7a9199b1f0a68d299193606f13ba045ddb63f2e02c6e (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq)
Jan 22 04:36:17 np0005591760 podman[102667]: 2026-01-22 09:36:17.000908441 +0000 UTC m=+0.107050947 container start b32a6f1fd65665d58e5c7a9199b1f0a68d299193606f13ba045ddb63f2e02c6e (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq)
Jan 22 04:36:17 np0005591760 bash[102667]: b32a6f1fd65665d58e5c7a9199b1f0a68d299193606f13ba045ddb63f2e02c6e
Jan 22 04:36:17 np0005591760 podman[102667]: 2026-01-22 09:36:16.927070091 +0000 UTC m=+0.033212618 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 22 04:36:17 np0005591760 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.dnpemq for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:36:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [NOTICE] 021/093617 (2) : New worker #1 (4) forked
Jan 22 04:36:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 22 04:36:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:17 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.uczfqf on compute-2
Jan 22 04:36:17 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.uczfqf on compute-2
Jan 22 04:36:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:17.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:36:17] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 22 04:36:17 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:17 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:17 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:36:17] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 22 04:36:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v18: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1014 B/s wr, 4 op/s
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:18 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 22 04:36:18 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 22 04:36:18 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:36:18 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:36:18 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:36:18 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:36:18 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.bcudmx on compute-1
Jan 22 04:36:18 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.bcudmx on compute-1
Jan 22 04:36:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:18 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efddc0016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:18.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: Deploying daemon haproxy.nfs.cephfs.compute-2.uczfqf on compute-2
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:18 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:18 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8001eb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 22 04:36:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:19.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:36:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:36:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:36:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:36:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:36:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:36:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:19 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8001eb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:19 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 22 04:36:19 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:36:19 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:36:19 np0005591760 ceph-mon[74254]: Deploying daemon keepalived.nfs.cephfs.compute-1.bcudmx on compute-1
Jan 22 04:36:19 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v19: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1014 B/s wr, 4 op/s
Jan 22 04:36:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:20 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efddc0016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:20.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:20 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4001c40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:21 np0005591760 systemd[1]: session-37.scope: Deactivated successfully.
Jan 22 04:36:21 np0005591760 systemd[1]: session-37.scope: Consumed 6.993s CPU time.
Jan 22 04:36:21 np0005591760 systemd-logind[747]: Session 37 logged out. Waiting for processes to exit.
Jan 22 04:36:21 np0005591760 systemd-logind[747]: Removed session 37.
Jan 22 04:36:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:21.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:21 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v20: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 0 op/s
Jan 22 04:36:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:22 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:36:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:36:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 22 04:36:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:22 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:36:22 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:36:22 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 22 04:36:22 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 22 04:36:22 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:36:22 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:36:22 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.qtywyd on compute-0
Jan 22 04:36:22 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.qtywyd on compute-0
Jan 22 04:36:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:22.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:22 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efddc002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:22 np0005591760 podman[102820]: 2026-01-22 09:36:22.963985055 +0000 UTC m=+0.032889388 container create c35d2ab9fa8b310c199aa0e60424e091c6a79024690b590c790883d8cec492ae (image=quay.io/ceph/keepalived:2.2.4, name=distracted_wozniak, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, version=2.2.4, name=keepalived, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, distribution-scope=public)
Jan 22 04:36:22 np0005591760 systemd[1]: Started libpod-conmon-c35d2ab9fa8b310c199aa0e60424e091c6a79024690b590c790883d8cec492ae.scope.
Jan 22 04:36:23 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:23 np0005591760 podman[102820]: 2026-01-22 09:36:23.032830831 +0000 UTC m=+0.101735154 container init c35d2ab9fa8b310c199aa0e60424e091c6a79024690b590c790883d8cec492ae (image=quay.io/ceph/keepalived:2.2.4, name=distracted_wozniak, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, name=keepalived)
Jan 22 04:36:23 np0005591760 podman[102820]: 2026-01-22 09:36:23.038690107 +0000 UTC m=+0.107594431 container start c35d2ab9fa8b310c199aa0e60424e091c6a79024690b590c790883d8cec492ae (image=quay.io/ceph/keepalived:2.2.4, name=distracted_wozniak, name=keepalived, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.openshift.tags=Ceph keepalived, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 22 04:36:23 np0005591760 podman[102820]: 2026-01-22 09:36:23.04002751 +0000 UTC m=+0.108931833 container attach c35d2ab9fa8b310c199aa0e60424e091c6a79024690b590c790883d8cec492ae (image=quay.io/ceph/keepalived:2.2.4, name=distracted_wozniak, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vcs-type=git, name=keepalived, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 22 04:36:23 np0005591760 distracted_wozniak[102833]: 0 0
Jan 22 04:36:23 np0005591760 systemd[1]: libpod-c35d2ab9fa8b310c199aa0e60424e091c6a79024690b590c790883d8cec492ae.scope: Deactivated successfully.
Jan 22 04:36:23 np0005591760 podman[102820]: 2026-01-22 09:36:23.044256282 +0000 UTC m=+0.113160605 container died c35d2ab9fa8b310c199aa0e60424e091c6a79024690b590c790883d8cec492ae (image=quay.io/ceph/keepalived:2.2.4, name=distracted_wozniak, release=1793, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public)
Jan 22 04:36:23 np0005591760 podman[102820]: 2026-01-22 09:36:22.95053114 +0000 UTC m=+0.019435483 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 22 04:36:23 np0005591760 systemd[1]: var-lib-containers-storage-overlay-84c9e822024c8238c0a48fd06dd817424aff44fc0e33c864929d6ef888485928-merged.mount: Deactivated successfully.
Jan 22 04:36:23 np0005591760 podman[102820]: 2026-01-22 09:36:23.067424972 +0000 UTC m=+0.136329295 container remove c35d2ab9fa8b310c199aa0e60424e091c6a79024690b590c790883d8cec492ae (image=quay.io/ceph/keepalived:2.2.4, name=distracted_wozniak, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, description=keepalived for Ceph)
Jan 22 04:36:23 np0005591760 systemd[1]: libpod-conmon-c35d2ab9fa8b310c199aa0e60424e091c6a79024690b590c790883d8cec492ae.scope: Deactivated successfully.
Jan 22 04:36:23 np0005591760 systemd[1]: Reloading.
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:36:23 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:36:23 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:36:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:23.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:23 np0005591760 systemd[1]: Reloading.
Jan 22 04:36:23 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:36:23 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: Deploying daemon keepalived.nfs.cephfs.compute-0.qtywyd on compute-0
Jan 22 04:36:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:23 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efddc002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:23 np0005591760 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.qtywyd for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:36:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v21: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:36:23 np0005591760 podman[102968]: 2026-01-22 09:36:23.821662855 +0000 UTC m=+0.033394801 container create 9d573b276e63193b14c97d21b6d0a0125a0b7cd0fefa69b4c9c9fa0b568f2284 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd, description=keepalived for Ceph, vcs-type=git, com.redhat.component=keepalived-container, name=keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived)
Jan 22 04:36:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5e896a3830640cf1e2cbdfbea6628b2741bacde1d590d047443c34c9daab80/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:23 np0005591760 podman[102968]: 2026-01-22 09:36:23.867951534 +0000 UTC m=+0.079683489 container init 9d573b276e63193b14c97d21b6d0a0125a0b7cd0fefa69b4c9c9fa0b568f2284 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd, io.buildah.version=1.28.2, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, version=2.2.4, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 04:36:23 np0005591760 podman[102968]: 2026-01-22 09:36:23.872921294 +0000 UTC m=+0.084653239 container start 9d573b276e63193b14c97d21b6d0a0125a0b7cd0fefa69b4c9c9fa0b568f2284 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, name=keepalived, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, distribution-scope=public, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived)
Jan 22 04:36:23 np0005591760 bash[102968]: 9d573b276e63193b14c97d21b6d0a0125a0b7cd0fefa69b4c9c9fa0b568f2284
Jan 22 04:36:23 np0005591760 podman[102968]: 2026-01-22 09:36:23.80912135 +0000 UTC m=+0.020853316 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 22 04:36:23 np0005591760 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.qtywyd for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:36:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd[102980]: Thu Jan 22 09:36:23 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 22 04:36:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd[102980]: Thu Jan 22 09:36:23 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 22 04:36:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd[102980]: Thu Jan 22 09:36:23 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 22 04:36:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd[102980]: Thu Jan 22 09:36:23 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 22 04:36:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd[102980]: Thu Jan 22 09:36:23 2026: Failed to bind to process monitoring socket - errno 98 - Address already in use
Jan 22 04:36:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd[102980]: Thu Jan 22 09:36:23 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 22 04:36:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd[102980]: Thu Jan 22 09:36:23 2026: Starting VRRP child process, pid=4
Jan 22 04:36:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd[102980]: Thu Jan 22 09:36:23 2026: Startup complete
Jan 22 04:36:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:36:23 2026: (VI_0) Entering BACKUP STATE
Jan 22 04:36:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd[102980]: Thu Jan 22 09:36:23 2026: (VI_0) Entering BACKUP STATE (init)
Jan 22 04:36:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd[102980]: Thu Jan 22 09:36:23 2026: VRRP_Script(check_backend) succeeded
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 22 04:36:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:23 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:36:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:36:23 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:36:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:36:23 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 22 04:36:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 22 04:36:23 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.bromuh on compute-2
Jan 22 04:36:23 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.bromuh on compute-2
Jan 22 04:36:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:24 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu[97867]: Thu Jan 22 09:36:24 2026: (VI_0) Entering MASTER STATE
Jan 22 04:36:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:24.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:24 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:24 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:24 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:24 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:24 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 04:36:24 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 04:36:24 np0005591760 ceph-mon[74254]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 22 04:36:24 np0005591760 ceph-mon[74254]: Deploying daemon keepalived.nfs.cephfs.compute-2.bromuh on compute-2
Jan 22 04:36:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:36:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:36:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 22 04:36:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:25 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev ccd5c9ee-2138-4754-96ca-b13b0a26b891 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Jan 22 04:36:25 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event ccd5c9ee-2138-4754-96ca-b13b0a26b891 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 13 seconds
Jan 22 04:36:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 22 04:36:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:36:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:36:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:25.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:36:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:25 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efddc002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:25 np0005591760 podman[103070]: 2026-01-22 09:36:25.585571656 +0000 UTC m=+0.030728474 container create 1e0fe4252a354404b9cb364f76626e11e3e10193f69d5bbc9780e2f8e348fda2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:36:25 np0005591760 systemd[1]: Started libpod-conmon-1e0fe4252a354404b9cb364f76626e11e3e10193f69d5bbc9780e2f8e348fda2.scope.
Jan 22 04:36:25 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:25 np0005591760 podman[103070]: 2026-01-22 09:36:25.635979641 +0000 UTC m=+0.081136478 container init 1e0fe4252a354404b9cb364f76626e11e3e10193f69d5bbc9780e2f8e348fda2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 04:36:25 np0005591760 podman[103070]: 2026-01-22 09:36:25.642976954 +0000 UTC m=+0.088133771 container start 1e0fe4252a354404b9cb364f76626e11e3e10193f69d5bbc9780e2f8e348fda2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:36:25 np0005591760 podman[103070]: 2026-01-22 09:36:25.6444072 +0000 UTC m=+0.089564018 container attach 1e0fe4252a354404b9cb364f76626e11e3e10193f69d5bbc9780e2f8e348fda2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 22 04:36:25 np0005591760 romantic_sammet[103083]: 167 167
Jan 22 04:36:25 np0005591760 systemd[1]: libpod-1e0fe4252a354404b9cb364f76626e11e3e10193f69d5bbc9780e2f8e348fda2.scope: Deactivated successfully.
Jan 22 04:36:25 np0005591760 podman[103070]: 2026-01-22 09:36:25.650880614 +0000 UTC m=+0.096037431 container died 1e0fe4252a354404b9cb364f76626e11e3e10193f69d5bbc9780e2f8e348fda2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 04:36:25 np0005591760 systemd[1]: var-lib-containers-storage-overlay-af9679106855089f37de1aa0a72a51a52925b6f375a20371dc566bf776e2d773-merged.mount: Deactivated successfully.
Jan 22 04:36:25 np0005591760 podman[103070]: 2026-01-22 09:36:25.572988183 +0000 UTC m=+0.018145020 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:25 np0005591760 podman[103070]: 2026-01-22 09:36:25.671631907 +0000 UTC m=+0.116788714 container remove 1e0fe4252a354404b9cb364f76626e11e3e10193f69d5bbc9780e2f8e348fda2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:36:25 np0005591760 systemd[1]: libpod-conmon-1e0fe4252a354404b9cb364f76626e11e3e10193f69d5bbc9780e2f8e348fda2.scope: Deactivated successfully.
Jan 22 04:36:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v22: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:36:25 np0005591760 podman[103104]: 2026-01-22 09:36:25.808694765 +0000 UTC m=+0.036931549 container create 1fdd85c45fb7fb7369e13361843c23c14bd25ebec3c0b4edb8fd6c4ff735e18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 04:36:25 np0005591760 systemd[1]: Started libpod-conmon-1fdd85c45fb7fb7369e13361843c23c14bd25ebec3c0b4edb8fd6c4ff735e18d.scope.
Jan 22 04:36:25 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:25 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e52f9fcb5c5512280797e15a59fa15fae534cd7f4b8bfd12bd987e4d2a96092/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:25 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e52f9fcb5c5512280797e15a59fa15fae534cd7f4b8bfd12bd987e4d2a96092/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:25 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e52f9fcb5c5512280797e15a59fa15fae534cd7f4b8bfd12bd987e4d2a96092/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:25 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e52f9fcb5c5512280797e15a59fa15fae534cd7f4b8bfd12bd987e4d2a96092/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:25 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e52f9fcb5c5512280797e15a59fa15fae534cd7f4b8bfd12bd987e4d2a96092/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:25 np0005591760 podman[103104]: 2026-01-22 09:36:25.874998369 +0000 UTC m=+0.103235153 container init 1fdd85c45fb7fb7369e13361843c23c14bd25ebec3c0b4edb8fd6c4ff735e18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:36:25 np0005591760 podman[103104]: 2026-01-22 09:36:25.880622662 +0000 UTC m=+0.108859447 container start 1fdd85c45fb7fb7369e13361843c23c14bd25ebec3c0b4edb8fd6c4ff735e18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mclaren, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:36:25 np0005591760 podman[103104]: 2026-01-22 09:36:25.881989039 +0000 UTC m=+0.110225823 container attach 1fdd85c45fb7fb7369e13361843c23c14bd25ebec3c0b4edb8fd6c4ff735e18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mclaren, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:36:25 np0005591760 podman[103104]: 2026-01-22 09:36:25.796696574 +0000 UTC m=+0.024933359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:26 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:36:26 np0005591760 frosty_mclaren[103117]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:36:26 np0005591760 frosty_mclaren[103117]: --> All data devices are unavailable
Jan 22 04:36:26 np0005591760 systemd[1]: libpod-1fdd85c45fb7fb7369e13361843c23c14bd25ebec3c0b4edb8fd6c4ff735e18d.scope: Deactivated successfully.
Jan 22 04:36:26 np0005591760 podman[103133]: 2026-01-22 09:36:26.218793234 +0000 UTC m=+0.021778212 container died 1fdd85c45fb7fb7369e13361843c23c14bd25ebec3c0b4edb8fd6c4ff735e18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mclaren, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:26 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6e52f9fcb5c5512280797e15a59fa15fae534cd7f4b8bfd12bd987e4d2a96092-merged.mount: Deactivated successfully.
Jan 22 04:36:26 np0005591760 podman[103133]: 2026-01-22 09:36:26.246577657 +0000 UTC m=+0.049562634 container remove 1fdd85c45fb7fb7369e13361843c23c14bd25ebec3c0b4edb8fd6c4ff735e18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mclaren, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:26 np0005591760 systemd[1]: libpod-conmon-1fdd85c45fb7fb7369e13361843c23c14bd25ebec3c0b4edb8fd6c4ff735e18d.scope: Deactivated successfully.
Jan 22 04:36:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:26 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efddc002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:26.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:26 np0005591760 podman[103227]: 2026-01-22 09:36:26.735487851 +0000 UTC m=+0.033633370 container create 74a4336249843cfaafd208363cfb23678e200e958d5bb7e2e4d89374418ac077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_babbage, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:36:26 np0005591760 systemd[1]: Started libpod-conmon-74a4336249843cfaafd208363cfb23678e200e958d5bb7e2e4d89374418ac077.scope.
Jan 22 04:36:26 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:26 np0005591760 podman[103227]: 2026-01-22 09:36:26.793425692 +0000 UTC m=+0.091571212 container init 74a4336249843cfaafd208363cfb23678e200e958d5bb7e2e4d89374418ac077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_babbage, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:36:26 np0005591760 podman[103227]: 2026-01-22 09:36:26.79837264 +0000 UTC m=+0.096518149 container start 74a4336249843cfaafd208363cfb23678e200e958d5bb7e2e4d89374418ac077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:36:26 np0005591760 podman[103227]: 2026-01-22 09:36:26.799538697 +0000 UTC m=+0.097684207 container attach 74a4336249843cfaafd208363cfb23678e200e958d5bb7e2e4d89374418ac077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:26 np0005591760 cranky_babbage[103240]: 167 167
Jan 22 04:36:26 np0005591760 systemd[1]: libpod-74a4336249843cfaafd208363cfb23678e200e958d5bb7e2e4d89374418ac077.scope: Deactivated successfully.
Jan 22 04:36:26 np0005591760 conmon[103240]: conmon 74a4336249843cfaafd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-74a4336249843cfaafd208363cfb23678e200e958d5bb7e2e4d89374418ac077.scope/container/memory.events
Jan 22 04:36:26 np0005591760 podman[103227]: 2026-01-22 09:36:26.803394628 +0000 UTC m=+0.101540137 container died 74a4336249843cfaafd208363cfb23678e200e958d5bb7e2e4d89374418ac077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_babbage, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:36:26 np0005591760 systemd[1]: var-lib-containers-storage-overlay-eb9d9b1243888c04040ea83a934a64f6caa8a687312a2006dae37cda42b4ba42-merged.mount: Deactivated successfully.
Jan 22 04:36:26 np0005591760 podman[103227]: 2026-01-22 09:36:26.722255925 +0000 UTC m=+0.020401454 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:26 np0005591760 podman[103227]: 2026-01-22 09:36:26.824963031 +0000 UTC m=+0.123108540 container remove 74a4336249843cfaafd208363cfb23678e200e958d5bb7e2e4d89374418ac077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:26 np0005591760 systemd[1]: libpod-conmon-74a4336249843cfaafd208363cfb23678e200e958d5bb7e2e4d89374418ac077.scope: Deactivated successfully.
Jan 22 04:36:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:26 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80045f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:26 np0005591760 podman[103262]: 2026-01-22 09:36:26.965288641 +0000 UTC m=+0.039193013 container create 9fce6c9733c459cf8cc34d2338bc9605f00f3c3ed9c95f8bc33fdaabc2e2d23a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bhaskara, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 04:36:26 np0005591760 systemd[1]: Started libpod-conmon-9fce6c9733c459cf8cc34d2338bc9605f00f3c3ed9c95f8bc33fdaabc2e2d23a.scope.
Jan 22 04:36:27 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aaeac6ee0e3ffb81e171d16be831f7cf30e5e7a28e47241b01363bf3caa8686/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aaeac6ee0e3ffb81e171d16be831f7cf30e5e7a28e47241b01363bf3caa8686/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aaeac6ee0e3ffb81e171d16be831f7cf30e5e7a28e47241b01363bf3caa8686/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aaeac6ee0e3ffb81e171d16be831f7cf30e5e7a28e47241b01363bf3caa8686/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:27 np0005591760 podman[103262]: 2026-01-22 09:36:27.03201452 +0000 UTC m=+0.105918913 container init 9fce6c9733c459cf8cc34d2338bc9605f00f3c3ed9c95f8bc33fdaabc2e2d23a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bhaskara, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:36:27 np0005591760 podman[103262]: 2026-01-22 09:36:27.037062627 +0000 UTC m=+0.110967000 container start 9fce6c9733c459cf8cc34d2338bc9605f00f3c3ed9c95f8bc33fdaabc2e2d23a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:27 np0005591760 podman[103262]: 2026-01-22 09:36:27.03849582 +0000 UTC m=+0.112400193 container attach 9fce6c9733c459cf8cc34d2338bc9605f00f3c3ed9c95f8bc33fdaabc2e2d23a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 04:36:27 np0005591760 podman[103262]: 2026-01-22 09:36:26.949718185 +0000 UTC m=+0.023622568 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.130194) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074587130252, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1489, "num_deletes": 252, "total_data_size": 4275748, "memory_usage": 4491432, "flush_reason": "Manual Compaction"}
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074587141751, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3999227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 5946, "largest_seqno": 7433, "table_properties": {"data_size": 3992612, "index_size": 3429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17108, "raw_average_key_size": 20, "raw_value_size": 3977702, "raw_average_value_size": 4850, "num_data_blocks": 154, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074543, "oldest_key_time": 1769074543, "file_creation_time": 1769074587, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 11600 microseconds, and 9741 cpu microseconds.
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.141802) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3999227 bytes OK
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.141818) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.142175) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.142190) EVENT_LOG_v1 {"time_micros": 1769074587142185, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.142209) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 4268577, prev total WAL file size 4268577, number of live WAL files 2.
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.143144) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3905KB)], [20(10081KB)]
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074587143205, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 14323057, "oldest_snapshot_seqno": -1}
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2628 keys, 12957619 bytes, temperature: kUnknown
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074587172360, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 12957619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12936030, "index_size": 13928, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6597, "raw_key_size": 66559, "raw_average_key_size": 25, "raw_value_size": 12883343, "raw_average_value_size": 4902, "num_data_blocks": 617, "num_entries": 2628, "num_filter_entries": 2628, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769074587, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.172656) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12957619 bytes
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.173117) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 489.3 rd, 442.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 9.8 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 3169, records dropped: 541 output_compression: NoCompression
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.173140) EVENT_LOG_v1 {"time_micros": 1769074587173130, "job": 6, "event": "compaction_finished", "compaction_time_micros": 29271, "compaction_time_cpu_micros": 24581, "output_level": 6, "num_output_files": 1, "total_output_size": 12957619, "num_input_records": 3169, "num_output_records": 2628, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074587173867, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074587175801, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.143068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.175888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.175894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.175896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.175897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:36:27 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:36:27.175898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]: {
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:    "0": [
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:        {
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            "devices": [
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "/dev/loop3"
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            ],
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            "lv_name": "ceph_lv0",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            "lv_size": "21470642176",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            "name": "ceph_lv0",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            "tags": {
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.cluster_name": "ceph",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.crush_device_class": "",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.encrypted": "0",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.osd_id": "0",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.type": "block",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.vdo": "0",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:                "ceph.with_tpm": "0"
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            },
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            "type": "block",
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:            "vg_name": "ceph_vg0"
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:        }
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]:    ]
Jan 22 04:36:27 np0005591760 keen_bhaskara[103275]: }
Jan 22 04:36:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:36:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:27.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:36:27 np0005591760 systemd[1]: libpod-9fce6c9733c459cf8cc34d2338bc9605f00f3c3ed9c95f8bc33fdaabc2e2d23a.scope: Deactivated successfully.
Jan 22 04:36:27 np0005591760 podman[103284]: 2026-01-22 09:36:27.346175165 +0000 UTC m=+0.022105307 container died 9fce6c9733c459cf8cc34d2338bc9605f00f3c3ed9c95f8bc33fdaabc2e2d23a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bhaskara, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 04:36:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4aaeac6ee0e3ffb81e171d16be831f7cf30e5e7a28e47241b01363bf3caa8686-merged.mount: Deactivated successfully.
Jan 22 04:36:27 np0005591760 podman[103284]: 2026-01-22 09:36:27.372387504 +0000 UTC m=+0.048317626 container remove 9fce6c9733c459cf8cc34d2338bc9605f00f3c3ed9c95f8bc33fdaabc2e2d23a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 04:36:27 np0005591760 systemd[1]: libpod-conmon-9fce6c9733c459cf8cc34d2338bc9605f00f3c3ed9c95f8bc33fdaabc2e2d23a.scope: Deactivated successfully.
Jan 22 04:36:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-nfs-cephfs-compute-0-qtywyd[102980]: Thu Jan 22 09:36:27 2026: (VI_0) Entering MASTER STATE
Jan 22 04:36:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:27 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80045f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:36:27] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Jan 22 04:36:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:36:27] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Jan 22 04:36:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v23: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:36:27 np0005591760 podman[103378]: 2026-01-22 09:36:27.859934187 +0000 UTC m=+0.038275433 container create 51d6fed961934a2312cfd590b492e9ca53b229e589dcdbeac8f02599bdc58566 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:36:27 np0005591760 systemd[1]: Started libpod-conmon-51d6fed961934a2312cfd590b492e9ca53b229e589dcdbeac8f02599bdc58566.scope.
Jan 22 04:36:27 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:27 np0005591760 podman[103378]: 2026-01-22 09:36:27.926919526 +0000 UTC m=+0.105260762 container init 51d6fed961934a2312cfd590b492e9ca53b229e589dcdbeac8f02599bdc58566 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_gauss, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:27 np0005591760 podman[103378]: 2026-01-22 09:36:27.932258893 +0000 UTC m=+0.110600129 container start 51d6fed961934a2312cfd590b492e9ca53b229e589dcdbeac8f02599bdc58566 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:36:27 np0005591760 podman[103378]: 2026-01-22 09:36:27.933508619 +0000 UTC m=+0.111849855 container attach 51d6fed961934a2312cfd590b492e9ca53b229e589dcdbeac8f02599bdc58566 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 04:36:27 np0005591760 practical_gauss[103391]: 167 167
Jan 22 04:36:27 np0005591760 systemd[1]: libpod-51d6fed961934a2312cfd590b492e9ca53b229e589dcdbeac8f02599bdc58566.scope: Deactivated successfully.
Jan 22 04:36:27 np0005591760 podman[103378]: 2026-01-22 09:36:27.938349475 +0000 UTC m=+0.116690711 container died 51d6fed961934a2312cfd590b492e9ca53b229e589dcdbeac8f02599bdc58566 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_gauss, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:27 np0005591760 podman[103378]: 2026-01-22 09:36:27.844038157 +0000 UTC m=+0.022379404 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-25efa821e5e3deb016106f6359a50dd3397b0920c52c79f4baa7174f96133893-merged.mount: Deactivated successfully.
Jan 22 04:36:27 np0005591760 podman[103378]: 2026-01-22 09:36:27.958352918 +0000 UTC m=+0.136694154 container remove 51d6fed961934a2312cfd590b492e9ca53b229e589dcdbeac8f02599bdc58566 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_gauss, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:36:27 np0005591760 systemd[1]: libpod-conmon-51d6fed961934a2312cfd590b492e9ca53b229e589dcdbeac8f02599bdc58566.scope: Deactivated successfully.
Jan 22 04:36:28 np0005591760 podman[103414]: 2026-01-22 09:36:28.09457436 +0000 UTC m=+0.037434768 container create c0f04d80ce6e653aa0433e08f1b756a85a03598fefb3582f41b5c0a80fa93ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_nightingale, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 04:36:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:36:28 np0005591760 systemd[1]: Started libpod-conmon-c0f04d80ce6e653aa0433e08f1b756a85a03598fefb3582f41b5c0a80fa93ef0.scope.
Jan 22 04:36:28 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2c8dc07c8a430d2abd673257c178e8a22e8269a24245dae26ef05494ae07cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2c8dc07c8a430d2abd673257c178e8a22e8269a24245dae26ef05494ae07cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2c8dc07c8a430d2abd673257c178e8a22e8269a24245dae26ef05494ae07cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2c8dc07c8a430d2abd673257c178e8a22e8269a24245dae26ef05494ae07cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:28 np0005591760 podman[103414]: 2026-01-22 09:36:28.158386266 +0000 UTC m=+0.101246693 container init c0f04d80ce6e653aa0433e08f1b756a85a03598fefb3582f41b5c0a80fa93ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:36:28 np0005591760 podman[103414]: 2026-01-22 09:36:28.163434584 +0000 UTC m=+0.106294991 container start c0f04d80ce6e653aa0433e08f1b756a85a03598fefb3582f41b5c0a80fa93ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:28 np0005591760 podman[103414]: 2026-01-22 09:36:28.164712132 +0000 UTC m=+0.107572539 container attach c0f04d80ce6e653aa0433e08f1b756a85a03598fefb3582f41b5c0a80fa93ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_nightingale, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 22 04:36:28 np0005591760 podman[103414]: 2026-01-22 09:36:28.078588391 +0000 UTC m=+0.021448818 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:28 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efddc002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:28.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:28 np0005591760 lvm[103503]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:36:28 np0005591760 lvm[103503]: VG ceph_vg0 finished
Jan 22 04:36:28 np0005591760 sharp_nightingale[103427]: {}
Jan 22 04:36:28 np0005591760 lvm[103506]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:36:28 np0005591760 lvm[103506]: VG ceph_vg0 finished
Jan 22 04:36:28 np0005591760 podman[103414]: 2026-01-22 09:36:28.781207095 +0000 UTC m=+0.724067502 container died c0f04d80ce6e653aa0433e08f1b756a85a03598fefb3582f41b5c0a80fa93ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:36:28 np0005591760 systemd[1]: libpod-c0f04d80ce6e653aa0433e08f1b756a85a03598fefb3582f41b5c0a80fa93ef0.scope: Deactivated successfully.
Jan 22 04:36:28 np0005591760 systemd[1]: libpod-c0f04d80ce6e653aa0433e08f1b756a85a03598fefb3582f41b5c0a80fa93ef0.scope: Consumed 1.013s CPU time.
Jan 22 04:36:28 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3e2c8dc07c8a430d2abd673257c178e8a22e8269a24245dae26ef05494ae07cd-merged.mount: Deactivated successfully.
Jan 22 04:36:28 np0005591760 podman[103414]: 2026-01-22 09:36:28.817192179 +0000 UTC m=+0.760052586 container remove c0f04d80ce6e653aa0433e08f1b756a85a03598fefb3582f41b5c0a80fa93ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 04:36:28 np0005591760 systemd[1]: libpod-conmon-c0f04d80ce6e653aa0433e08f1b756a85a03598fefb3582f41b5c0a80fa93ef0.scope: Deactivated successfully.
Jan 22 04:36:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:28 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efddc002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:29 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:29 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:29 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 15 completed events
Jan 22 04:36:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:36:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:36:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:29.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:36:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:29 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80045f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:29 np0005591760 podman[103672]: 2026-01-22 09:36:29.647692862 +0000 UTC m=+0.052058859 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:36:29 np0005591760 podman[103672]: 2026-01-22 09:36:29.741466466 +0000 UTC m=+0.145832463 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Jan 22 04:36:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v24: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:36:30 np0005591760 podman[103771]: 2026-01-22 09:36:30.122176048 +0000 UTC m=+0.045910568 container exec 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:30 np0005591760 podman[103771]: 2026-01-22 09:36:30.129751549 +0000 UTC m=+0.053486059 container exec_died 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:30 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:30 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80045f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:30 np0005591760 podman[103856]: 2026-01-22 09:36:30.498741975 +0000 UTC m=+0.042733197 container exec 065a60b194a9778f2f8646ef2f22164f095837f4040ec6eece4458eea7fc026d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:30 np0005591760 podman[103856]: 2026-01-22 09:36:30.525636011 +0000 UTC m=+0.069627242 container exec_died 065a60b194a9778f2f8646ef2f22164f095837f4040ec6eece4458eea7fc026d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:30.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:36:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:36:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:30 np0005591760 podman[103915]: 2026-01-22 09:36:30.715768812 +0000 UTC m=+0.045947578 container exec 69534e59d7e144792f090c9414441b1986b3d2a1dd2f4af9ffb4fbe51024b14e (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:30 np0005591760 podman[103915]: 2026-01-22 09:36:30.864721186 +0000 UTC m=+0.194899952 container exec_died 69534e59d7e144792f090c9414441b1986b3d2a1dd2f4af9ffb4fbe51024b14e (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:30 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4002bc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:31 np0005591760 podman[103974]: 2026-01-22 09:36:31.076896725 +0000 UTC m=+0.048230781 container exec fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:36:31 np0005591760 podman[103974]: 2026-01-22 09:36:31.110025183 +0000 UTC m=+0.081359240 container exec_died fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:36:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:31.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:36:31 np0005591760 podman[104025]: 2026-01-22 09:36:31.308934341 +0000 UTC m=+0.047632013 container exec 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, io.openshift.expose-services=, com.redhat.component=keepalived-container, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 22 04:36:31 np0005591760 podman[104025]: 2026-01-22 09:36:31.323298592 +0000 UTC m=+0.061996254 container exec_died 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, io.openshift.expose-services=, io.buildah.version=1.28.2, release=1793, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 22 04:36:31 np0005591760 podman[104076]: 2026-01-22 09:36:31.521969872 +0000 UTC m=+0.046247503 container exec a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:31 np0005591760 podman[104076]: 2026-01-22 09:36:31.566519633 +0000 UTC m=+0.090797254 container exec_died a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:31 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4002bc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:31 np0005591760 podman[104124]: 2026-01-22 09:36:31.715743498 +0000 UTC m=+0.046202157 container exec 031496c0ee52749d6ca2497660dbe05b1127b23323048c8cd033c2e060138ea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Jan 22 04:36:31 np0005591760 podman[104124]: 2026-01-22 09:36:31.728019683 +0000 UTC m=+0.058478322 container exec_died 031496c0ee52749d6ca2497660dbe05b1127b23323048c8cd033c2e060138ea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v25: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:36:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:32 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:32 np0005591760 podman[104265]: 2026-01-22 09:36:32.45958446 +0000 UTC m=+0.036656150 container create 269900cbc9898468c8cf104ce9516a9d770cbfe426055e7783462e98ddbb69f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:36:32 np0005591760 systemd[1]: Started libpod-conmon-269900cbc9898468c8cf104ce9516a9d770cbfe426055e7783462e98ddbb69f5.scope.
Jan 22 04:36:32 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:32 np0005591760 podman[104265]: 2026-01-22 09:36:32.523718614 +0000 UTC m=+0.100790304 container init 269900cbc9898468c8cf104ce9516a9d770cbfe426055e7783462e98ddbb69f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_varahamihira, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:32 np0005591760 podman[104265]: 2026-01-22 09:36:32.530813229 +0000 UTC m=+0.107884919 container start 269900cbc9898468c8cf104ce9516a9d770cbfe426055e7783462e98ddbb69f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:32 np0005591760 podman[104265]: 2026-01-22 09:36:32.532211104 +0000 UTC m=+0.109282804 container attach 269900cbc9898468c8cf104ce9516a9d770cbfe426055e7783462e98ddbb69f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Jan 22 04:36:32 np0005591760 ecstatic_varahamihira[104278]: 167 167
Jan 22 04:36:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:32.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:32 np0005591760 podman[104265]: 2026-01-22 09:36:32.445270052 +0000 UTC m=+0.022341762 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:32 np0005591760 systemd[1]: libpod-269900cbc9898468c8cf104ce9516a9d770cbfe426055e7783462e98ddbb69f5.scope: Deactivated successfully.
Jan 22 04:36:32 np0005591760 podman[104283]: 2026-01-22 09:36:32.586564818 +0000 UTC m=+0.030226347 container died 269900cbc9898468c8cf104ce9516a9d770cbfe426055e7783462e98ddbb69f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:36:32 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6321d1aa49e99108402ee749741589a8027140ec3d5e1bfdbe35e083a89bc57b-merged.mount: Deactivated successfully.
Jan 22 04:36:32 np0005591760 podman[104283]: 2026-01-22 09:36:32.611326894 +0000 UTC m=+0.054988392 container remove 269900cbc9898468c8cf104ce9516a9d770cbfe426055e7783462e98ddbb69f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:32 np0005591760 systemd[1]: libpod-conmon-269900cbc9898468c8cf104ce9516a9d770cbfe426055e7783462e98ddbb69f5.scope: Deactivated successfully.
Jan 22 04:36:32 np0005591760 podman[104303]: 2026-01-22 09:36:32.76411576 +0000 UTC m=+0.041901209 container create fdf591ee322f7093ca9dab4d677e911605da12a813fc13280eef978f0d7d7ad7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ganguly, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:32 np0005591760 systemd[1]: Started libpod-conmon-fdf591ee322f7093ca9dab4d677e911605da12a813fc13280eef978f0d7d7ad7.scope.
Jan 22 04:36:32 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d766cd2809af96271f38204a3ce4f2003887e4d6d59d502fccee55385d190b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d766cd2809af96271f38204a3ce4f2003887e4d6d59d502fccee55385d190b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d766cd2809af96271f38204a3ce4f2003887e4d6d59d502fccee55385d190b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d766cd2809af96271f38204a3ce4f2003887e4d6d59d502fccee55385d190b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d766cd2809af96271f38204a3ce4f2003887e4d6d59d502fccee55385d190b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:32 np0005591760 podman[104303]: 2026-01-22 09:36:32.839329232 +0000 UTC m=+0.117114701 container init fdf591ee322f7093ca9dab4d677e911605da12a813fc13280eef978f0d7d7ad7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:36:32 np0005591760 podman[104303]: 2026-01-22 09:36:32.748968803 +0000 UTC m=+0.026754272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:32 np0005591760 podman[104303]: 2026-01-22 09:36:32.845867067 +0000 UTC m=+0.123652526 container start fdf591ee322f7093ca9dab4d677e911605da12a813fc13280eef978f0d7d7ad7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:36:32 np0005591760 podman[104303]: 2026-01-22 09:36:32.847193167 +0000 UTC m=+0.124978616 container attach fdf591ee322f7093ca9dab4d677e911605da12a813fc13280eef978f0d7d7ad7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:32 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:32 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:32 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:32 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:36:32 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:32 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:32 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:36:33 np0005591760 priceless_ganguly[104316]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:36:33 np0005591760 priceless_ganguly[104316]: --> All data devices are unavailable
Jan 22 04:36:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:36:33 np0005591760 systemd[1]: libpod-fdf591ee322f7093ca9dab4d677e911605da12a813fc13280eef978f0d7d7ad7.scope: Deactivated successfully.
Jan 22 04:36:33 np0005591760 conmon[104316]: conmon fdf591ee322f7093ca9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdf591ee322f7093ca9dab4d677e911605da12a813fc13280eef978f0d7d7ad7.scope/container/memory.events
Jan 22 04:36:33 np0005591760 podman[104303]: 2026-01-22 09:36:33.140166036 +0000 UTC m=+0.417951486 container died fdf591ee322f7093ca9dab4d677e911605da12a813fc13280eef978f0d7d7ad7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 04:36:33 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c8d766cd2809af96271f38204a3ce4f2003887e4d6d59d502fccee55385d190b-merged.mount: Deactivated successfully.
Jan 22 04:36:33 np0005591760 podman[104303]: 2026-01-22 09:36:33.166753031 +0000 UTC m=+0.444538481 container remove fdf591ee322f7093ca9dab4d677e911605da12a813fc13280eef978f0d7d7ad7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ganguly, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:36:33 np0005591760 systemd[1]: libpod-conmon-fdf591ee322f7093ca9dab4d677e911605da12a813fc13280eef978f0d7d7ad7.scope: Deactivated successfully.
Jan 22 04:36:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:33.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:33 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:33 np0005591760 podman[104424]: 2026-01-22 09:36:33.685322293 +0000 UTC m=+0.034424332 container create 51763ff06ab3e5828b78d10eec3f733fe1cba1d5156b1e0c3a283db818b0d587 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 04:36:33 np0005591760 systemd[1]: Started libpod-conmon-51763ff06ab3e5828b78d10eec3f733fe1cba1d5156b1e0c3a283db818b0d587.scope.
Jan 22 04:36:33 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:33 np0005591760 podman[104424]: 2026-01-22 09:36:33.741877307 +0000 UTC m=+0.090979356 container init 51763ff06ab3e5828b78d10eec3f733fe1cba1d5156b1e0c3a283db818b0d587 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:33 np0005591760 podman[104424]: 2026-01-22 09:36:33.747963953 +0000 UTC m=+0.097065982 container start 51763ff06ab3e5828b78d10eec3f733fe1cba1d5156b1e0c3a283db818b0d587 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 04:36:33 np0005591760 podman[104424]: 2026-01-22 09:36:33.749416662 +0000 UTC m=+0.098518781 container attach 51763ff06ab3e5828b78d10eec3f733fe1cba1d5156b1e0c3a283db818b0d587 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 04:36:33 np0005591760 bold_tu[104437]: 167 167
Jan 22 04:36:33 np0005591760 systemd[1]: libpod-51763ff06ab3e5828b78d10eec3f733fe1cba1d5156b1e0c3a283db818b0d587.scope: Deactivated successfully.
Jan 22 04:36:33 np0005591760 podman[104424]: 2026-01-22 09:36:33.752726742 +0000 UTC m=+0.101828922 container died 51763ff06ab3e5828b78d10eec3f733fe1cba1d5156b1e0c3a283db818b0d587 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:36:33 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a16f4b144e6866b265d0c7bbd8baca3f3775a461430645f069568bfecaf3fcd7-merged.mount: Deactivated successfully.
Jan 22 04:36:33 np0005591760 podman[104424]: 2026-01-22 09:36:33.671557502 +0000 UTC m=+0.020659531 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:33 np0005591760 podman[104424]: 2026-01-22 09:36:33.775256238 +0000 UTC m=+0.124358267 container remove 51763ff06ab3e5828b78d10eec3f733fe1cba1d5156b1e0c3a283db818b0d587 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_tu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:36:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v26: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:36:33 np0005591760 systemd[1]: libpod-conmon-51763ff06ab3e5828b78d10eec3f733fe1cba1d5156b1e0c3a283db818b0d587.scope: Deactivated successfully.
Jan 22 04:36:33 np0005591760 podman[104459]: 2026-01-22 09:36:33.945038012 +0000 UTC m=+0.037692304 container create 018a78b669fe50aec558ab8e4c07b50e5a48301e61873ccb5731d073a4fb684b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_haslett, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:33 np0005591760 systemd[1]: Started libpod-conmon-018a78b669fe50aec558ab8e4c07b50e5a48301e61873ccb5731d073a4fb684b.scope.
Jan 22 04:36:34 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea899f56d1fbf1ffb5438535ab3034b01c1c129eae3197b0406d17a8198a8d03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea899f56d1fbf1ffb5438535ab3034b01c1c129eae3197b0406d17a8198a8d03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea899f56d1fbf1ffb5438535ab3034b01c1c129eae3197b0406d17a8198a8d03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea899f56d1fbf1ffb5438535ab3034b01c1c129eae3197b0406d17a8198a8d03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:34 np0005591760 podman[104459]: 2026-01-22 09:36:34.018068337 +0000 UTC m=+0.110722639 container init 018a78b669fe50aec558ab8e4c07b50e5a48301e61873ccb5731d073a4fb684b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_haslett, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:36:34 np0005591760 podman[104459]: 2026-01-22 09:36:34.023301192 +0000 UTC m=+0.115955474 container start 018a78b669fe50aec558ab8e4c07b50e5a48301e61873ccb5731d073a4fb684b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_haslett, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:34 np0005591760 podman[104459]: 2026-01-22 09:36:33.929553127 +0000 UTC m=+0.022207430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:34 np0005591760 podman[104459]: 2026-01-22 09:36:34.026551891 +0000 UTC m=+0.119206193 container attach 018a78b669fe50aec558ab8e4c07b50e5a48301e61873ccb5731d073a4fb684b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_haslett, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]: {
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:    "0": [
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:        {
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            "devices": [
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "/dev/loop3"
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            ],
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            "lv_name": "ceph_lv0",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            "lv_size": "21470642176",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            "name": "ceph_lv0",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            "tags": {
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.cluster_name": "ceph",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.crush_device_class": "",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.encrypted": "0",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.osd_id": "0",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.type": "block",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.vdo": "0",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:                "ceph.with_tpm": "0"
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            },
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            "type": "block",
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:            "vg_name": "ceph_vg0"
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:        }
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]:    ]
Jan 22 04:36:34 np0005591760 intelligent_haslett[104474]: }
Jan 22 04:36:34 np0005591760 systemd[1]: libpod-018a78b669fe50aec558ab8e4c07b50e5a48301e61873ccb5731d073a4fb684b.scope: Deactivated successfully.
Jan 22 04:36:34 np0005591760 podman[104459]: 2026-01-22 09:36:34.293166015 +0000 UTC m=+0.385820298 container died 018a78b669fe50aec558ab8e4c07b50e5a48301e61873ccb5731d073a4fb684b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_haslett, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 22 04:36:34 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ea899f56d1fbf1ffb5438535ab3034b01c1c129eae3197b0406d17a8198a8d03-merged.mount: Deactivated successfully.
Jan 22 04:36:34 np0005591760 podman[104459]: 2026-01-22 09:36:34.320583816 +0000 UTC m=+0.413238099 container remove 018a78b669fe50aec558ab8e4c07b50e5a48301e61873ccb5731d073a4fb684b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_haslett, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:34 np0005591760 systemd[1]: libpod-conmon-018a78b669fe50aec558ab8e4c07b50e5a48301e61873ccb5731d073a4fb684b.scope: Deactivated successfully.
Jan 22 04:36:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:34 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.002000020s ======
Jan 22 04:36:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:34.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Jan 22 04:36:34 np0005591760 podman[104575]: 2026-01-22 09:36:34.848381216 +0000 UTC m=+0.037768456 container create fec63d5dd54dc8690b1240187d93690534f52947b00482d893d35be27e02f555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:36:34 np0005591760 systemd[1]: Started libpod-conmon-fec63d5dd54dc8690b1240187d93690534f52947b00482d893d35be27e02f555.scope.
Jan 22 04:36:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:34 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4003d90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:34 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:34 np0005591760 podman[104575]: 2026-01-22 09:36:34.918042741 +0000 UTC m=+0.107430000 container init fec63d5dd54dc8690b1240187d93690534f52947b00482d893d35be27e02f555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 22 04:36:34 np0005591760 podman[104575]: 2026-01-22 09:36:34.925925151 +0000 UTC m=+0.115312391 container start fec63d5dd54dc8690b1240187d93690534f52947b00482d893d35be27e02f555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_stonebraker, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 22 04:36:34 np0005591760 podman[104575]: 2026-01-22 09:36:34.831896467 +0000 UTC m=+0.021283727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:34 np0005591760 podman[104575]: 2026-01-22 09:36:34.927534185 +0000 UTC m=+0.116921425 container attach fec63d5dd54dc8690b1240187d93690534f52947b00482d893d35be27e02f555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 22 04:36:34 np0005591760 jovial_stonebraker[104588]: 167 167
Jan 22 04:36:34 np0005591760 systemd[1]: libpod-fec63d5dd54dc8690b1240187d93690534f52947b00482d893d35be27e02f555.scope: Deactivated successfully.
Jan 22 04:36:34 np0005591760 conmon[104588]: conmon fec63d5dd54dc8690b12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fec63d5dd54dc8690b1240187d93690534f52947b00482d893d35be27e02f555.scope/container/memory.events
Jan 22 04:36:34 np0005591760 podman[104575]: 2026-01-22 09:36:34.931907169 +0000 UTC m=+0.121294409 container died fec63d5dd54dc8690b1240187d93690534f52947b00482d893d35be27e02f555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:34 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1ff30e02b2a0b5cb6a85ccbaf90a668d1d91502fe8a5d43654422ce2407dfb2c-merged.mount: Deactivated successfully.
Jan 22 04:36:34 np0005591760 podman[104575]: 2026-01-22 09:36:34.960686528 +0000 UTC m=+0.150073768 container remove fec63d5dd54dc8690b1240187d93690534f52947b00482d893d35be27e02f555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 22 04:36:34 np0005591760 systemd[1]: libpod-conmon-fec63d5dd54dc8690b1240187d93690534f52947b00482d893d35be27e02f555.scope: Deactivated successfully.
Jan 22 04:36:35 np0005591760 podman[104610]: 2026-01-22 09:36:35.109818751 +0000 UTC m=+0.039995927 container create 03f2416401f845335b57fc9f00c129ecb5c50a0469c3005dd881e7eeab167e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jones, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 22 04:36:35 np0005591760 systemd[1]: Started libpod-conmon-03f2416401f845335b57fc9f00c129ecb5c50a0469c3005dd881e7eeab167e31.scope.
Jan 22 04:36:35 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c494c81f451c2d77da0d91fdad4096c8f873d655a279237e6bc8a30397b6c08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c494c81f451c2d77da0d91fdad4096c8f873d655a279237e6bc8a30397b6c08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c494c81f451c2d77da0d91fdad4096c8f873d655a279237e6bc8a30397b6c08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:35 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c494c81f451c2d77da0d91fdad4096c8f873d655a279237e6bc8a30397b6c08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:35 np0005591760 podman[104610]: 2026-01-22 09:36:35.188638081 +0000 UTC m=+0.118815277 container init 03f2416401f845335b57fc9f00c129ecb5c50a0469c3005dd881e7eeab167e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:36:35 np0005591760 podman[104610]: 2026-01-22 09:36:35.094866461 +0000 UTC m=+0.025043657 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:35 np0005591760 podman[104610]: 2026-01-22 09:36:35.195153474 +0000 UTC m=+0.125330651 container start 03f2416401f845335b57fc9f00c129ecb5c50a0469c3005dd881e7eeab167e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 22 04:36:35 np0005591760 podman[104610]: 2026-01-22 09:36:35.1964297 +0000 UTC m=+0.126606876 container attach 03f2416401f845335b57fc9f00c129ecb5c50a0469c3005dd881e7eeab167e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jones, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 04:36:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:35.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:35 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf0001930 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v27: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:36:35 np0005591760 reverent_jones[104623]: {}
Jan 22 04:36:35 np0005591760 lvm[104700]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:36:35 np0005591760 lvm[104700]: VG ceph_vg0 finished
Jan 22 04:36:35 np0005591760 systemd[1]: libpod-03f2416401f845335b57fc9f00c129ecb5c50a0469c3005dd881e7eeab167e31.scope: Deactivated successfully.
Jan 22 04:36:35 np0005591760 systemd[1]: libpod-03f2416401f845335b57fc9f00c129ecb5c50a0469c3005dd881e7eeab167e31.scope: Consumed 1.046s CPU time.
Jan 22 04:36:35 np0005591760 podman[104610]: 2026-01-22 09:36:35.835769964 +0000 UTC m=+0.765947140 container died 03f2416401f845335b57fc9f00c129ecb5c50a0469c3005dd881e7eeab167e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jones, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:36:35 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3c494c81f451c2d77da0d91fdad4096c8f873d655a279237e6bc8a30397b6c08-merged.mount: Deactivated successfully.
Jan 22 04:36:35 np0005591760 podman[104610]: 2026-01-22 09:36:35.869754636 +0000 UTC m=+0.799931812 container remove 03f2416401f845335b57fc9f00c129ecb5c50a0469c3005dd881e7eeab167e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jones, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:35 np0005591760 systemd[1]: libpod-conmon-03f2416401f845335b57fc9f00c129ecb5c50a0469c3005dd881e7eeab167e31.scope: Deactivated successfully.
Jan 22 04:36:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:36 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Jan 22 04:36:36 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Jan 22 04:36:36 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Jan 22 04:36:36 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Jan 22 04:36:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:36 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:36 np0005591760 systemd-logind[747]: New session 38 of user zuul.
Jan 22 04:36:36 np0005591760 systemd[1]: Started Session 38 of User zuul.
Jan 22 04:36:36 np0005591760 systemd[1]: Stopping Ceph node-exporter.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:36:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.002000020s ======
Jan 22 04:36:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:36.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Jan 22 04:36:36 np0005591760 podman[104881]: 2026-01-22 09:36:36.649637348 +0000 UTC m=+0.047593781 container died 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:36 np0005591760 systemd[1]: var-lib-containers-storage-overlay-5e406c006b115af27979891aa34b688c048909e5d99ce9707efc38daf5cb2c46-merged.mount: Deactivated successfully.
Jan 22 04:36:36 np0005591760 podman[104881]: 2026-01-22 09:36:36.674701692 +0000 UTC m=+0.072658135 container remove 51f54375f606bf6ca8d79cdd131b50b6240b45b83fcba0945907323b8c712246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:36 np0005591760 bash[104881]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0
Jan 22 04:36:36 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Jan 22 04:36:36 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@node-exporter.compute-0.service: Failed with result 'exit-code'.
Jan 22 04:36:36 np0005591760 systemd[1]: Stopped Ceph node-exporter.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:36:36 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@node-exporter.compute-0.service: Consumed 1.625s CPU time.
Jan 22 04:36:36 np0005591760 systemd[1]: Starting Ceph node-exporter.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:36:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:36 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf0001930 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:36 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:36 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:36 np0005591760 ceph-mon[74254]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Jan 22 04:36:36 np0005591760 ceph-mon[74254]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Jan 22 04:36:37 np0005591760 python3.9[105032]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 22 04:36:37 np0005591760 podman[105064]: 2026-01-22 09:36:37.048183888 +0000 UTC m=+0.036211892 container create e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47629bd1c9872d78d136007b9d4089b58a6ba862e163803dcabf0c2a830721e/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:37 np0005591760 podman[105064]: 2026-01-22 09:36:37.098936673 +0000 UTC m=+0.086964687 container init e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 podman[105064]: 2026-01-22 09:36:37.104321706 +0000 UTC m=+0.092349700 container start e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 bash[105064]: e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce
Jan 22 04:36:37 np0005591760 podman[105064]: 2026-01-22 09:36:37.032460805 +0000 UTC m=+0.020488809 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Jan 22 04:36:37 np0005591760 systemd[1]: Started Ceph node-exporter.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.114Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.115Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.115Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.115Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=arp
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=bcache
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=bonding
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=cpu
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=dmi
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=edac
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=entropy
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=filefd
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=hwmon
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.117Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=netclass
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=netdev
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=netstat
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=nfs
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=nvme
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=os
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=pressure
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=rapl
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=selinux
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=softnet
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=stat
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=textfile
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=thermal_zone
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=time
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=uname
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=xfs
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=node_exporter.go:117 level=info collector=zfs
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0[105083]: ts=2026-01-22T09:36:37.118Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Jan 22 04:36:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:37 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 22 04:36:37 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 22 04:36:37 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 22 04:36:37 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 22 04:36:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:37.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:37 np0005591760 podman[105250]: 2026-01-22 09:36:37.567489395 +0000 UTC m=+0.029978258 volume create b4473889875aa28314fd399b2c8e565b6658d12e471c73c71b6cf36affe13bcb
Jan 22 04:36:37 np0005591760 podman[105250]: 2026-01-22 09:36:37.573585945 +0000 UTC m=+0.036074808 container create 6ffa271d3b7121ac8a059f7d5b0bae690e4d305e71008e75ba98cf6ef067b6c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=unruffled_wozniak, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:37 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4004970 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:37 np0005591760 systemd[1]: Started libpod-conmon-6ffa271d3b7121ac8a059f7d5b0bae690e4d305e71008e75ba98cf6ef067b6c4.scope.
Jan 22 04:36:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:36:37] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 22 04:36:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:36:37] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 22 04:36:37 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:37 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2881930065e4cec29bbe1a3a909d74e9d1090a03087163adfadec5d99d91e67/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:37 np0005591760 podman[105250]: 2026-01-22 09:36:37.632246737 +0000 UTC m=+0.094735610 container init 6ffa271d3b7121ac8a059f7d5b0bae690e4d305e71008e75ba98cf6ef067b6c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=unruffled_wozniak, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 podman[105250]: 2026-01-22 09:36:37.638359247 +0000 UTC m=+0.100848111 container start 6ffa271d3b7121ac8a059f7d5b0bae690e4d305e71008e75ba98cf6ef067b6c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=unruffled_wozniak, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 unruffled_wozniak[105286]: 65534 65534
Jan 22 04:36:37 np0005591760 systemd[1]: libpod-6ffa271d3b7121ac8a059f7d5b0bae690e4d305e71008e75ba98cf6ef067b6c4.scope: Deactivated successfully.
Jan 22 04:36:37 np0005591760 podman[105250]: 2026-01-22 09:36:37.641714178 +0000 UTC m=+0.104203041 container attach 6ffa271d3b7121ac8a059f7d5b0bae690e4d305e71008e75ba98cf6ef067b6c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=unruffled_wozniak, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 podman[105250]: 2026-01-22 09:36:37.641932061 +0000 UTC m=+0.104420923 container died 6ffa271d3b7121ac8a059f7d5b0bae690e4d305e71008e75ba98cf6ef067b6c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=unruffled_wozniak, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 podman[105250]: 2026-01-22 09:36:37.556394062 +0000 UTC m=+0.018882945 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 22 04:36:37 np0005591760 systemd[1]: var-lib-containers-storage-overlay-d2881930065e4cec29bbe1a3a909d74e9d1090a03087163adfadec5d99d91e67-merged.mount: Deactivated successfully.
Jan 22 04:36:37 np0005591760 podman[105250]: 2026-01-22 09:36:37.664208889 +0000 UTC m=+0.126697752 container remove 6ffa271d3b7121ac8a059f7d5b0bae690e4d305e71008e75ba98cf6ef067b6c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=unruffled_wozniak, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 podman[105250]: 2026-01-22 09:36:37.6667553 +0000 UTC m=+0.129244173 volume remove b4473889875aa28314fd399b2c8e565b6658d12e471c73c71b6cf36affe13bcb
Jan 22 04:36:37 np0005591760 systemd[1]: libpod-conmon-6ffa271d3b7121ac8a059f7d5b0bae690e4d305e71008e75ba98cf6ef067b6c4.scope: Deactivated successfully.
Jan 22 04:36:37 np0005591760 podman[105346]: 2026-01-22 09:36:37.720699393 +0000 UTC m=+0.032129881 volume create c7200cf70581b50f289d29a29bdb1d543f8598bdf839d1c5d0b9ce2f5625df42
Jan 22 04:36:37 np0005591760 podman[105346]: 2026-01-22 09:36:37.728772495 +0000 UTC m=+0.040202993 container create 3df3b6f77aa39b4d6b0072de6e3af24d4cf9c4b7a65d2bc34b066da72d67d6ed (image=quay.io/prometheus/alertmanager:v0.25.0, name=elated_albattani, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 systemd[1]: Started libpod-conmon-3df3b6f77aa39b4d6b0072de6e3af24d4cf9c4b7a65d2bc34b066da72d67d6ed.scope.
Jan 22 04:36:37 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v28: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:36:37 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25dabb61ad691afc3087306009955b4e6205bc78e90cafce390ab6fce908fb00/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:37 np0005591760 podman[105346]: 2026-01-22 09:36:37.790388 +0000 UTC m=+0.101818507 container init 3df3b6f77aa39b4d6b0072de6e3af24d4cf9c4b7a65d2bc34b066da72d67d6ed (image=quay.io/prometheus/alertmanager:v0.25.0, name=elated_albattani, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 podman[105346]: 2026-01-22 09:36:37.795548929 +0000 UTC m=+0.106979416 container start 3df3b6f77aa39b4d6b0072de6e3af24d4cf9c4b7a65d2bc34b066da72d67d6ed (image=quay.io/prometheus/alertmanager:v0.25.0, name=elated_albattani, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 elated_albattani[105366]: 65534 65534
Jan 22 04:36:37 np0005591760 systemd[1]: libpod-3df3b6f77aa39b4d6b0072de6e3af24d4cf9c4b7a65d2bc34b066da72d67d6ed.scope: Deactivated successfully.
Jan 22 04:36:37 np0005591760 podman[105346]: 2026-01-22 09:36:37.797309291 +0000 UTC m=+0.108739779 container attach 3df3b6f77aa39b4d6b0072de6e3af24d4cf9c4b7a65d2bc34b066da72d67d6ed (image=quay.io/prometheus/alertmanager:v0.25.0, name=elated_albattani, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 podman[105346]: 2026-01-22 09:36:37.797564413 +0000 UTC m=+0.108994901 container died 3df3b6f77aa39b4d6b0072de6e3af24d4cf9c4b7a65d2bc34b066da72d67d6ed (image=quay.io/prometheus/alertmanager:v0.25.0, name=elated_albattani, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 podman[105346]: 2026-01-22 09:36:37.71035422 +0000 UTC m=+0.021784728 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 22 04:36:37 np0005591760 systemd[1]: var-lib-containers-storage-overlay-25dabb61ad691afc3087306009955b4e6205bc78e90cafce390ab6fce908fb00-merged.mount: Deactivated successfully.
Jan 22 04:36:37 np0005591760 podman[105346]: 2026-01-22 09:36:37.820115512 +0000 UTC m=+0.131545999 container remove 3df3b6f77aa39b4d6b0072de6e3af24d4cf9c4b7a65d2bc34b066da72d67d6ed (image=quay.io/prometheus/alertmanager:v0.25.0, name=elated_albattani, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:37 np0005591760 podman[105346]: 2026-01-22 09:36:37.822853194 +0000 UTC m=+0.134283692 volume remove c7200cf70581b50f289d29a29bdb1d543f8598bdf839d1c5d0b9ce2f5625df42
Jan 22 04:36:37 np0005591760 systemd[1]: libpod-conmon-3df3b6f77aa39b4d6b0072de6e3af24d4cf9c4b7a65d2bc34b066da72d67d6ed.scope: Deactivated successfully.
Jan 22 04:36:37 np0005591760 systemd[1]: Stopping Ceph alertmanager.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:36:37 np0005591760 python3.9[105353]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:36:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[96607]: ts=2026-01-22T09:36:38.010Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Jan 22 04:36:38 np0005591760 podman[105408]: 2026-01-22 09:36:38.020452259 +0000 UTC m=+0.039394292 container died 065a60b194a9778f2f8646ef2f22164f095837f4040ec6eece4458eea7fc026d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:38 np0005591760 podman[105408]: 2026-01-22 09:36:38.039321617 +0000 UTC m=+0.058263650 container remove 065a60b194a9778f2f8646ef2f22164f095837f4040ec6eece4458eea7fc026d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:38 np0005591760 podman[105408]: 2026-01-22 09:36:38.04146273 +0000 UTC m=+0.060404764 volume remove 120bc6bac3c0203fa6a9463fa695180f3a238cdf155b832382897c374633ceca
Jan 22 04:36:38 np0005591760 bash[105408]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0
Jan 22 04:36:38 np0005591760 systemd[1]: var-lib-containers-storage-overlay-63c3e7be19abe1bac5dd2f4deb46b4ef746f96aa4662a2aa6c2367a1f168e3a1-merged.mount: Deactivated successfully.
Jan 22 04:36:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:36:38 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@alertmanager.compute-0.service: Deactivated successfully.
Jan 22 04:36:38 np0005591760 systemd[1]: Stopped Ceph alertmanager.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:36:38 np0005591760 systemd[1]: Starting Ceph alertmanager.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:36:38 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:38 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:38 np0005591760 ceph-mon[74254]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 22 04:36:38 np0005591760 ceph-mon[74254]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 22 04:36:38 np0005591760 podman[105518]: 2026-01-22 09:36:38.34533818 +0000 UTC m=+0.037390299 volume create aabb2ddc4b9c19e4f55ba543f956d5370fd7425d1a91bf5fc9d90e4ca3c9529f
Jan 22 04:36:38 np0005591760 podman[105518]: 2026-01-22 09:36:38.351654616 +0000 UTC m=+0.043706735 container create 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:38 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4004970 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:38 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2761756a90adfc7b81b0de10b47b045b74729340187be36600a41cbc10e416f9/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:38 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2761756a90adfc7b81b0de10b47b045b74729340187be36600a41cbc10e416f9/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:38 np0005591760 podman[105518]: 2026-01-22 09:36:38.390712621 +0000 UTC m=+0.082764760 container init 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:38 np0005591760 podman[105518]: 2026-01-22 09:36:38.396536665 +0000 UTC m=+0.088588785 container start 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:38 np0005591760 bash[105518]: 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8
Jan 22 04:36:38 np0005591760 podman[105518]: 2026-01-22 09:36:38.33279709 +0000 UTC m=+0.024849219 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 22 04:36:38 np0005591760 systemd[1]: Started Ceph alertmanager.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:36:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:36:38.422Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Jan 22 04:36:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:36:38.423Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Jan 22 04:36:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:36:38.428Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.26.184 port=9094
Jan 22 04:36:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:36:38.431Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Jan 22 04:36:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:38 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 22 04:36:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 22 04:36:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:36:38.460Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 22 04:36:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:36:38.461Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 22 04:36:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:36:38.464Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Jan 22 04:36:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:36:38.464Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Jan 22 04:36:38 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Jan 22 04:36:38 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Jan 22 04:36:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:38.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:38 np0005591760 python3.9[105727]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:36:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:38 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:38 np0005591760 podman[105764]: 2026-01-22 09:36:38.915904621 +0000 UTC m=+0.034998600 container create 38c014d5d05257d53db3ec7f94687e53b892ef0443181397b60e8a10502d13ed (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_panini, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:38 np0005591760 systemd[1]: Started libpod-conmon-38c014d5d05257d53db3ec7f94687e53b892ef0443181397b60e8a10502d13ed.scope.
Jan 22 04:36:38 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:38 np0005591760 podman[105764]: 2026-01-22 09:36:38.97438454 +0000 UTC m=+0.093478520 container init 38c014d5d05257d53db3ec7f94687e53b892ef0443181397b60e8a10502d13ed (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_panini, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:38 np0005591760 podman[105764]: 2026-01-22 09:36:38.979428117 +0000 UTC m=+0.098522097 container start 38c014d5d05257d53db3ec7f94687e53b892ef0443181397b60e8a10502d13ed (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_panini, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:38 np0005591760 podman[105764]: 2026-01-22 09:36:38.980570449 +0000 UTC m=+0.099664430 container attach 38c014d5d05257d53db3ec7f94687e53b892ef0443181397b60e8a10502d13ed (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_panini, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:38 np0005591760 nostalgic_panini[105781]: 472 0
Jan 22 04:36:38 np0005591760 systemd[1]: libpod-38c014d5d05257d53db3ec7f94687e53b892ef0443181397b60e8a10502d13ed.scope: Deactivated successfully.
Jan 22 04:36:38 np0005591760 podman[105764]: 2026-01-22 09:36:38.98204438 +0000 UTC m=+0.101138370 container died 38c014d5d05257d53db3ec7f94687e53b892ef0443181397b60e8a10502d13ed (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_panini, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:38 np0005591760 systemd[1]: var-lib-containers-storage-overlay-8ec76e2df05962feb0478aed311d6d07e734d65630f2f06d6c066a0b72dfb723-merged.mount: Deactivated successfully.
Jan 22 04:36:38 np0005591760 podman[105764]: 2026-01-22 09:36:38.901189245 +0000 UTC m=+0.020283245 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 22 04:36:39 np0005591760 podman[105764]: 2026-01-22 09:36:39.004174521 +0000 UTC m=+0.123268501 container remove 38c014d5d05257d53db3ec7f94687e53b892ef0443181397b60e8a10502d13ed (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_panini, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:39 np0005591760 systemd[1]: libpod-conmon-38c014d5d05257d53db3ec7f94687e53b892ef0443181397b60e8a10502d13ed.scope: Deactivated successfully.
Jan 22 04:36:39 np0005591760 podman[105795]: 2026-01-22 09:36:39.058847754 +0000 UTC m=+0.033824489 container create 25fb4aeab2529408b4ebdb42c692439cc5b4a5cf861e489f00e9bdedb88f440e (image=quay.io/ceph/grafana:10.4.0, name=unruffled_dubinsky, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:39 np0005591760 systemd[1]: Started libpod-conmon-25fb4aeab2529408b4ebdb42c692439cc5b4a5cf861e489f00e9bdedb88f440e.scope.
Jan 22 04:36:39 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:39 np0005591760 podman[105795]: 2026-01-22 09:36:39.10538671 +0000 UTC m=+0.080363455 container init 25fb4aeab2529408b4ebdb42c692439cc5b4a5cf861e489f00e9bdedb88f440e (image=quay.io/ceph/grafana:10.4.0, name=unruffled_dubinsky, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:39 np0005591760 podman[105795]: 2026-01-22 09:36:39.113037272 +0000 UTC m=+0.088014007 container start 25fb4aeab2529408b4ebdb42c692439cc5b4a5cf861e489f00e9bdedb88f440e (image=quay.io/ceph/grafana:10.4.0, name=unruffled_dubinsky, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:39 np0005591760 podman[105795]: 2026-01-22 09:36:39.114191797 +0000 UTC m=+0.089168532 container attach 25fb4aeab2529408b4ebdb42c692439cc5b4a5cf861e489f00e9bdedb88f440e (image=quay.io/ceph/grafana:10.4.0, name=unruffled_dubinsky, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:39 np0005591760 unruffled_dubinsky[105821]: 472 0
Jan 22 04:36:39 np0005591760 systemd[1]: libpod-25fb4aeab2529408b4ebdb42c692439cc5b4a5cf861e489f00e9bdedb88f440e.scope: Deactivated successfully.
Jan 22 04:36:39 np0005591760 podman[105795]: 2026-01-22 09:36:39.1156619 +0000 UTC m=+0.090638635 container died 25fb4aeab2529408b4ebdb42c692439cc5b4a5cf861e489f00e9bdedb88f440e (image=quay.io/ceph/grafana:10.4.0, name=unruffled_dubinsky, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:39 np0005591760 systemd[1]: var-lib-containers-storage-overlay-124bf0d704c2864a3d71346068a93b7ca44eb2cd4f5bc721c4650eb35606b555-merged.mount: Deactivated successfully.
Jan 22 04:36:39 np0005591760 podman[105795]: 2026-01-22 09:36:39.136225476 +0000 UTC m=+0.111202211 container remove 25fb4aeab2529408b4ebdb42c692439cc5b4a5cf861e489f00e9bdedb88f440e (image=quay.io/ceph/grafana:10.4.0, name=unruffled_dubinsky, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:39 np0005591760 podman[105795]: 2026-01-22 09:36:39.043626552 +0000 UTC m=+0.018603287 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 22 04:36:39 np0005591760 systemd[1]: libpod-conmon-25fb4aeab2529408b4ebdb42c692439cc5b4a5cf861e489f00e9bdedb88f440e.scope: Deactivated successfully.
Jan 22 04:36:39 np0005591760 systemd[1]: Stopping Ceph grafana.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:36:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:39.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=server t=2026-01-22T09:36:39.330522249Z level=info msg="Shutdown started" reason="System signal: terminated"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=grafana-apiserver t=2026-01-22T09:36:39.331262059Z level=info msg="StorageObjectCountTracker pruner is exiting"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=tracing t=2026-01-22T09:36:39.331276347Z level=info msg="Closing tracing"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=ticker t=2026-01-22T09:36:39.331525299Z level=info msg=stopped last_tick=2026-01-22T09:36:30Z
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[97205]: logger=sqlstore.transactions t=2026-01-22T09:36:39.342870714Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 22 04:36:39 np0005591760 podman[105918]: 2026-01-22 09:36:39.3521136 +0000 UTC m=+0.047359770 container died 69534e59d7e144792f090c9414441b1986b3d2a1dd2f4af9ffb4fbe51024b14e (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:39 np0005591760 systemd[1]: var-lib-containers-storage-overlay-aefb6c753069f7f794f2fb13c3601a582a95ad793ea1c99c0ee4b1eefa1be134-merged.mount: Deactivated successfully.
Jan 22 04:36:39 np0005591760 podman[105918]: 2026-01-22 09:36:39.381643379 +0000 UTC m=+0.076889529 container remove 69534e59d7e144792f090c9414441b1986b3d2a1dd2f4af9ffb4fbe51024b14e (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:39 np0005591760 bash[105918]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0
Jan 22 04:36:39 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:39 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:39 np0005591760 ceph-mon[74254]: Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 22 04:36:39 np0005591760 ceph-mon[74254]: Reconfiguring daemon grafana.compute-0 on compute-0
Jan 22 04:36:39 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@grafana.compute-0.service: Deactivated successfully.
Jan 22 04:36:39 np0005591760 systemd[1]: Stopped Ceph grafana.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:36:39 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@grafana.compute-0.service: Consumed 3.767s CPU time.
Jan 22 04:36:39 np0005591760 systemd[1]: Starting Ceph grafana.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:36:39 np0005591760 python3.9[106005]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:39 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf0001930 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:39 np0005591760 podman[106061]: 2026-01-22 09:36:39.654493609 +0000 UTC m=+0.033695113 container create 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e01c99af16affaf66520e86d7bc1a844eeb3a17b1b117c6633d072bc8792344/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e01c99af16affaf66520e86d7bc1a844eeb3a17b1b117c6633d072bc8792344/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e01c99af16affaf66520e86d7bc1a844eeb3a17b1b117c6633d072bc8792344/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e01c99af16affaf66520e86d7bc1a844eeb3a17b1b117c6633d072bc8792344/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e01c99af16affaf66520e86d7bc1a844eeb3a17b1b117c6633d072bc8792344/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:39 np0005591760 podman[106061]: 2026-01-22 09:36:39.690346386 +0000 UTC m=+0.069547890 container init 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:39 np0005591760 podman[106061]: 2026-01-22 09:36:39.696387873 +0000 UTC m=+0.075589377 container start 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:39 np0005591760 bash[106061]: 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0
Jan 22 04:36:39 np0005591760 podman[106061]: 2026-01-22 09:36:39.641139371 +0000 UTC m=+0.020340895 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 22 04:36:39 np0005591760 systemd[1]: Started Ceph grafana.compute-0 for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:36:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:39 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring rgw.rgw.compute-2.aqqfbf (unknown last config time)...
Jan 22 04:36:39 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring rgw.rgw.compute-2.aqqfbf (unknown last config time)...
Jan 22 04:36:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aqqfbf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 22 04:36:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aqqfbf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:36:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 22 04:36:39 np0005591760 ceph-mgr[74522]: [cephadm INFO cephadm.serve] Reconfiguring daemon rgw.rgw.compute-2.aqqfbf on compute-2
Jan 22 04:36:39 np0005591760 ceph-mgr[74522]: log_channel(cephadm) log [INF] : Reconfiguring daemon rgw.rgw.compute-2.aqqfbf on compute-2
Jan 22 04:36:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v29: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828389796Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-01-22T09:36:39Z
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.82860879Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828616014Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828619921Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828623438Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828626503Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828629509Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828637073Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.82864042Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828647443Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828650359Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828653164Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828656199Z level=info msg=Target target=[all]
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.82866138Z level=info msg="Path Home" path=/usr/share/grafana
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828664526Z level=info msg="Path Data" path=/var/lib/grafana
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828667531Z level=info msg="Path Logs" path=/var/log/grafana
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828670237Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828673052Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=settings t=2026-01-22T09:36:39.828675857Z level=info msg="App mode production"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=sqlstore t=2026-01-22T09:36:39.828963572Z level=info msg="Connecting to DB" dbtype=sqlite3
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=sqlstore t=2026-01-22T09:36:39.82897874Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=migrator t=2026-01-22T09:36:39.829590198Z level=info msg="Starting DB migrations"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=migrator t=2026-01-22T09:36:39.843996347Z level=info msg="migrations completed" performed=0 skipped=547 duration=651.172µs
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=sqlstore t=2026-01-22T09:36:39.844848891Z level=info msg="Created default organization"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=secrets t=2026-01-22T09:36:39.845246924Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=plugin.store t=2026-01-22T09:36:39.865803207Z level=info msg="Loading plugins..."
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=local.finder t=2026-01-22T09:36:39.925428344Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=plugin.store t=2026-01-22T09:36:39.925483308Z level=info msg="Plugins loaded" count=55 duration=59.680923ms
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=query_data t=2026-01-22T09:36:39.9277403Z level=info msg="Query Service initialization"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=live.push_http t=2026-01-22T09:36:39.930857812Z level=info msg="Live Push Gateway initialization"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=ngalert.migration t=2026-01-22T09:36:39.932666896Z level=info msg=Starting
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=ngalert.state.manager t=2026-01-22T09:36:39.939965532Z level=info msg="Running in alternative execution of Error/NoData mode"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=infra.usagestats.collector t=2026-01-22T09:36:39.941527288Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=provisioning.datasources t=2026-01-22T09:36:39.943598349Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=provisioning.alerting t=2026-01-22T09:36:39.960522626Z level=info msg="starting to provision alerting"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=provisioning.alerting t=2026-01-22T09:36:39.960539548Z level=info msg="finished to provision alerting"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=grafanaStorageLogger t=2026-01-22T09:36:39.96071537Z level=info msg="Storage starting"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=ngalert.state.manager t=2026-01-22T09:36:39.961176553Z level=info msg="Warming state cache for startup"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=ngalert.multiorg.alertmanager t=2026-01-22T09:36:39.961384788Z level=info msg="Starting MultiOrg Alertmanager"
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=http.server t=2026-01-22T09:36:39.969517112Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=http.server t=2026-01-22T09:36:39.970919776Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Jan 22 04:36:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=provisioning.dashboard t=2026-01-22T09:36:39.994655898Z level=info msg="starting to provision dashboards"
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=ngalert.state.manager t=2026-01-22T09:36:40.000232173Z level=info msg="State cache has been initialized" states=0 duration=39.053486ms
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=ngalert.scheduler t=2026-01-22T09:36:40.000264094Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=ticker t=2026-01-22T09:36:40.000327153Z level=info msg=starting first_tick=2026-01-22T09:36:50Z
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=provisioning.dashboard t=2026-01-22T09:36:40.007639496Z level=info msg="finished to provision dashboards"
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=grafana.update.checker t=2026-01-22T09:36:40.020335889Z level=info msg="Update check succeeded" duration=57.166042ms
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=plugins.update.checker t=2026-01-22T09:36:40.021213039Z level=info msg="Update check succeeded" duration=58.03102ms
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=grafana-apiserver t=2026-01-22T09:36:40.165447156Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=grafana-apiserver t=2026-01-22T09:36:40.165886739Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 22 04:36:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 22 04:36:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:40 np0005591760 ceph-mgr[74522]: [prometheus INFO root] Restarting engine...
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: [22/Jan/2026:09:36:40] ENGINE Bus STOPPING
Jan 22 04:36:40 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.error] [22/Jan/2026:09:36:40] ENGINE Bus STOPPING
Jan 22 04:36:40 np0005591760 python3.9[106249]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:40 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4004970 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:36:40.431Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000565261s
Jan 22 04:36:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:40.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: [22/Jan/2026:09:36:40] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Jan 22 04:36:40 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.error] [22/Jan/2026:09:36:40] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Jan 22 04:36:40 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.error] [22/Jan/2026:09:36:40] ENGINE Bus STOPPED
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: [22/Jan/2026:09:36:40] ENGINE Bus STOPPED
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: [22/Jan/2026:09:36:40] ENGINE Bus STARTING
Jan 22 04:36:40 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.error] [22/Jan/2026:09:36:40] ENGINE Bus STARTING
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: [22/Jan/2026:09:36:40] ENGINE Serving on http://:::9283
Jan 22 04:36:40 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.error] [22/Jan/2026:09:36:40] ENGINE Serving on http://:::9283
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: [22/Jan/2026:09:36:40] ENGINE Bus STARTED
Jan 22 04:36:40 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.error] [22/Jan/2026:09:36:40] ENGINE Bus STARTED
Jan 22 04:36:40 np0005591760 ceph-mgr[74522]: [prometheus INFO root] Engine started.
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: Reconfiguring rgw.rgw.compute-2.aqqfbf (unknown last config time)...
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aqqfbf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aqqfbf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: Reconfiguring daemon rgw.rgw.compute-2.aqqfbf on compute-2
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 22 04:36:40 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:40 np0005591760 podman[106519]: 2026-01-22 09:36:40.809384435 +0000 UTC m=+0.044741314 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:40 np0005591760 python3.9[106497]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:36:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:40 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4004970 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:40 np0005591760 podman[106544]: 2026-01-22 09:36:40.953860971 +0000 UTC m=+0.048886741 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:40 np0005591760 podman[106519]: 2026-01-22 09:36:40.957372758 +0000 UTC m=+0.192729638 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 22 04:36:41 np0005591760 podman[106739]: 2026-01-22 09:36:41.28050493 +0000 UTC m=+0.038709756 container exec e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:41.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:41 np0005591760 podman[106739]: 2026-01-22 09:36:41.321068204 +0000 UTC m=+0.079273030 container exec_died e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:41 np0005591760 python3.9[106777]: ansible-ansible.builtin.service_facts Invoked
Jan 22 04:36:41 np0005591760 network[106849]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 04:36:41 np0005591760 network[106851]: 'network-scripts' will be removed from distribution in near future.
Jan 22 04:36:41 np0005591760 network[106852]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 04:36:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:41 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:41 np0005591760 podman[106877]: 2026-01-22 09:36:41.595195241 +0000 UTC m=+0.039576768 container exec 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:41 np0005591760 podman[106877]: 2026-01-22 09:36:41.619959169 +0000 UTC m=+0.064340674 container exec_died 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v30: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 22 04:36:42 np0005591760 podman[106945]: 2026-01-22 09:36:42.144193946 +0000 UTC m=+0.038703976 container exec 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:36:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:36:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:36:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:42 np0005591760 podman[106945]: 2026-01-22 09:36:42.289900772 +0000 UTC m=+0.184410801 container exec_died 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:36:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:36:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:42 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf0008dc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:42 np0005591760 podman[107025]: 2026-01-22 09:36:42.439096411 +0000 UTC m=+0.036716013 container exec fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:36:42 np0005591760 podman[107025]: 2026-01-22 09:36:42.447963926 +0000 UTC m=+0.045583509 container exec_died fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:36:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:42.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:42 np0005591760 podman[107090]: 2026-01-22 09:36:42.612740239 +0000 UTC m=+0.039932401 container exec 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=keepalived-container)
Jan 22 04:36:42 np0005591760 podman[107090]: 2026-01-22 09:36:42.620762765 +0000 UTC m=+0.047954927 container exec_died 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, architecture=x86_64, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, build-date=2023-02-22T09:23:20)
Jan 22 04:36:42 np0005591760 podman[107156]: 2026-01-22 09:36:42.78901151 +0000 UTC m=+0.036409532 container exec a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:42 np0005591760 podman[107186]: 2026-01-22 09:36:42.873871716 +0000 UTC m=+0.047511838 container exec_died a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:42 np0005591760 podman[107156]: 2026-01-22 09:36:42.878863145 +0000 UTC m=+0.126261186 container exec_died a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:36:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:42 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:43 np0005591760 podman[107221]: 2026-01-22 09:36:43.001643511 +0000 UTC m=+0.044140116 container exec 031496c0ee52749d6ca2497660dbe05b1127b23323048c8cd033c2e060138ea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 04:36:43 np0005591760 podman[107221]: 2026-01-22 09:36:43.036589723 +0000 UTC m=+0.079086318 container exec_died 031496c0ee52749d6ca2497660dbe05b1127b23323048c8cd033c2e060138ea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:36:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:43.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:43 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4005e60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:43 np0005591760 podman[107409]: 2026-01-22 09:36:43.728970732 +0000 UTC m=+0.036972688 container create bd6a60eb0630ddaa175017e737ec09e0688ff24b5b7cf469b25471f7d6606939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:43 np0005591760 systemd[1]: Started libpod-conmon-bd6a60eb0630ddaa175017e737ec09e0688ff24b5b7cf469b25471f7d6606939.scope.
Jan 22 04:36:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v31: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:36:43 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:43 np0005591760 podman[107409]: 2026-01-22 09:36:43.795818311 +0000 UTC m=+0.103820287 container init bd6a60eb0630ddaa175017e737ec09e0688ff24b5b7cf469b25471f7d6606939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:36:43 np0005591760 podman[107409]: 2026-01-22 09:36:43.801861481 +0000 UTC m=+0.109863437 container start bd6a60eb0630ddaa175017e737ec09e0688ff24b5b7cf469b25471f7d6606939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:43 np0005591760 podman[107409]: 2026-01-22 09:36:43.803413248 +0000 UTC m=+0.111415215 container attach bd6a60eb0630ddaa175017e737ec09e0688ff24b5b7cf469b25471f7d6606939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:43 np0005591760 beautiful_gates[107423]: 167 167
Jan 22 04:36:43 np0005591760 systemd[1]: libpod-bd6a60eb0630ddaa175017e737ec09e0688ff24b5b7cf469b25471f7d6606939.scope: Deactivated successfully.
Jan 22 04:36:43 np0005591760 conmon[107423]: conmon bd6a60eb0630ddaa1750 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd6a60eb0630ddaa175017e737ec09e0688ff24b5b7cf469b25471f7d6606939.scope/container/memory.events
Jan 22 04:36:43 np0005591760 podman[107409]: 2026-01-22 09:36:43.807997796 +0000 UTC m=+0.115999752 container died bd6a60eb0630ddaa175017e737ec09e0688ff24b5b7cf469b25471f7d6606939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:36:43 np0005591760 podman[107409]: 2026-01-22 09:36:43.714290784 +0000 UTC m=+0.022292760 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:43 np0005591760 systemd[1]: var-lib-containers-storage-overlay-10fdd504f87010e656d8801878e359f308a6605ba5e54369d0e4e196141d924e-merged.mount: Deactivated successfully.
Jan 22 04:36:43 np0005591760 podman[107409]: 2026-01-22 09:36:43.832949899 +0000 UTC m=+0.140951856 container remove bd6a60eb0630ddaa175017e737ec09e0688ff24b5b7cf469b25471f7d6606939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:36:43 np0005591760 systemd[1]: libpod-conmon-bd6a60eb0630ddaa175017e737ec09e0688ff24b5b7cf469b25471f7d6606939.scope: Deactivated successfully.
Jan 22 04:36:43 np0005591760 podman[107445]: 2026-01-22 09:36:43.97493011 +0000 UTC m=+0.035883246 container create c2c060f3bb2dced571d66f07db9f8018ac15a4fda03c05651ea42d9275b24553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_allen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:36:44 np0005591760 systemd[1]: Started libpod-conmon-c2c060f3bb2dced571d66f07db9f8018ac15a4fda03c05651ea42d9275b24553.scope.
Jan 22 04:36:44 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67cd89545d57d2c1790591fd00f4fa82b7ed72dcd4df4cde40a60c125cfbf53f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67cd89545d57d2c1790591fd00f4fa82b7ed72dcd4df4cde40a60c125cfbf53f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67cd89545d57d2c1790591fd00f4fa82b7ed72dcd4df4cde40a60c125cfbf53f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67cd89545d57d2c1790591fd00f4fa82b7ed72dcd4df4cde40a60c125cfbf53f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67cd89545d57d2c1790591fd00f4fa82b7ed72dcd4df4cde40a60c125cfbf53f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:44 np0005591760 podman[107445]: 2026-01-22 09:36:43.961558869 +0000 UTC m=+0.022512024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:44 np0005591760 podman[107445]: 2026-01-22 09:36:44.062671399 +0000 UTC m=+0.123624535 container init c2c060f3bb2dced571d66f07db9f8018ac15a4fda03c05651ea42d9275b24553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_allen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:36:44 np0005591760 podman[107445]: 2026-01-22 09:36:44.071904036 +0000 UTC m=+0.132857172 container start c2c060f3bb2dced571d66f07db9f8018ac15a4fda03c05651ea42d9275b24553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 04:36:44 np0005591760 podman[107445]: 2026-01-22 09:36:44.073796819 +0000 UTC m=+0.134749955 container attach c2c060f3bb2dced571d66f07db9f8018ac15a4fda03c05651ea42d9275b24553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_allen, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:44 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:44 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:36:44 np0005591760 cranky_allen[107458]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:36:44 np0005591760 cranky_allen[107458]: --> All data devices are unavailable
Jan 22 04:36:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:44 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf0008dc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:44 np0005591760 systemd[1]: libpod-c2c060f3bb2dced571d66f07db9f8018ac15a4fda03c05651ea42d9275b24553.scope: Deactivated successfully.
Jan 22 04:36:44 np0005591760 podman[107445]: 2026-01-22 09:36:44.383256609 +0000 UTC m=+0.444209745 container died c2c060f3bb2dced571d66f07db9f8018ac15a4fda03c05651ea42d9275b24553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:36:44 np0005591760 systemd[1]: var-lib-containers-storage-overlay-67cd89545d57d2c1790591fd00f4fa82b7ed72dcd4df4cde40a60c125cfbf53f-merged.mount: Deactivated successfully.
Jan 22 04:36:44 np0005591760 podman[107445]: 2026-01-22 09:36:44.409608002 +0000 UTC m=+0.470561138 container remove c2c060f3bb2dced571d66f07db9f8018ac15a4fda03c05651ea42d9275b24553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 04:36:44 np0005591760 systemd[1]: libpod-conmon-c2c060f3bb2dced571d66f07db9f8018ac15a4fda03c05651ea42d9275b24553.scope: Deactivated successfully.
Jan 22 04:36:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000017s ======
Jan 22 04:36:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:44.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 22 04:36:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:44 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4005e60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:44 np0005591760 podman[107625]: 2026-01-22 09:36:44.930238376 +0000 UTC m=+0.041336989 container create 2105ba8535e21fd2a6b49e6393e1d56501dc861df2cf45439ce246da600b62b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mclaren, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:36:44 np0005591760 systemd[1]: Started libpod-conmon-2105ba8535e21fd2a6b49e6393e1d56501dc861df2cf45439ce246da600b62b0.scope.
Jan 22 04:36:44 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:44 np0005591760 podman[107625]: 2026-01-22 09:36:44.998526913 +0000 UTC m=+0.109625546 container init 2105ba8535e21fd2a6b49e6393e1d56501dc861df2cf45439ce246da600b62b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:45 np0005591760 podman[107625]: 2026-01-22 09:36:45.004851875 +0000 UTC m=+0.115950488 container start 2105ba8535e21fd2a6b49e6393e1d56501dc861df2cf45439ce246da600b62b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:36:45 np0005591760 podman[107625]: 2026-01-22 09:36:45.006234633 +0000 UTC m=+0.117333256 container attach 2105ba8535e21fd2a6b49e6393e1d56501dc861df2cf45439ce246da600b62b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mclaren, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:36:45 np0005591760 podman[107625]: 2026-01-22 09:36:44.912988213 +0000 UTC m=+0.024086846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:45 np0005591760 busy_mclaren[107675]: 167 167
Jan 22 04:36:45 np0005591760 systemd[1]: libpod-2105ba8535e21fd2a6b49e6393e1d56501dc861df2cf45439ce246da600b62b0.scope: Deactivated successfully.
Jan 22 04:36:45 np0005591760 podman[107625]: 2026-01-22 09:36:45.010594245 +0000 UTC m=+0.121692879 container died 2105ba8535e21fd2a6b49e6393e1d56501dc861df2cf45439ce246da600b62b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:45 np0005591760 systemd[1]: var-lib-containers-storage-overlay-65e1784218e8a4ff150abb08f820aa41df75fe60ce5acc0f3d9b44dc9bfb0ee7-merged.mount: Deactivated successfully.
Jan 22 04:36:45 np0005591760 podman[107625]: 2026-01-22 09:36:45.039670846 +0000 UTC m=+0.150769459 container remove 2105ba8535e21fd2a6b49e6393e1d56501dc861df2cf45439ce246da600b62b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:36:45 np0005591760 systemd[1]: libpod-conmon-2105ba8535e21fd2a6b49e6393e1d56501dc861df2cf45439ce246da600b62b0.scope: Deactivated successfully.
Jan 22 04:36:45 np0005591760 podman[107727]: 2026-01-22 09:36:45.190520497 +0000 UTC m=+0.042352342 container create 37a54d72bfd00fd278cbb63b5aa1bff926c4fc95d0c02fffff0bd7bcbe04b6d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 04:36:45 np0005591760 python3.9[107719]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:36:45 np0005591760 systemd[1]: Started libpod-conmon-37a54d72bfd00fd278cbb63b5aa1bff926c4fc95d0c02fffff0bd7bcbe04b6d4.scope.
Jan 22 04:36:45 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997eab5b8172ba3f5693cb6f38b5fb2948651c5a5ee6830369d877e673bc5c36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997eab5b8172ba3f5693cb6f38b5fb2948651c5a5ee6830369d877e673bc5c36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997eab5b8172ba3f5693cb6f38b5fb2948651c5a5ee6830369d877e673bc5c36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997eab5b8172ba3f5693cb6f38b5fb2948651c5a5ee6830369d877e673bc5c36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:45 np0005591760 podman[107727]: 2026-01-22 09:36:45.170007646 +0000 UTC m=+0.021839501 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:45 np0005591760 podman[107727]: 2026-01-22 09:36:45.271175472 +0000 UTC m=+0.123007327 container init 37a54d72bfd00fd278cbb63b5aa1bff926c4fc95d0c02fffff0bd7bcbe04b6d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:36:45 np0005591760 podman[107727]: 2026-01-22 09:36:45.278972992 +0000 UTC m=+0.130804827 container start 37a54d72bfd00fd278cbb63b5aa1bff926c4fc95d0c02fffff0bd7bcbe04b6d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 04:36:45 np0005591760 podman[107727]: 2026-01-22 09:36:45.280813035 +0000 UTC m=+0.132644880 container attach 37a54d72bfd00fd278cbb63b5aa1bff926c4fc95d0c02fffff0bd7bcbe04b6d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 22 04:36:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000018s ======
Jan 22 04:36:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:45.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]: {
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:    "0": [
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:        {
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            "devices": [
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "/dev/loop3"
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            ],
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            "lv_name": "ceph_lv0",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            "lv_size": "21470642176",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            "name": "ceph_lv0",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            "tags": {
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.cluster_name": "ceph",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.crush_device_class": "",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.encrypted": "0",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.osd_id": "0",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.type": "block",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.vdo": "0",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:                "ceph.with_tpm": "0"
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            },
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            "type": "block",
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:            "vg_name": "ceph_vg0"
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:        }
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]:    ]
Jan 22 04:36:45 np0005591760 sleepy_carver[107740]: }
Jan 22 04:36:45 np0005591760 systemd[1]: libpod-37a54d72bfd00fd278cbb63b5aa1bff926c4fc95d0c02fffff0bd7bcbe04b6d4.scope: Deactivated successfully.
Jan 22 04:36:45 np0005591760 podman[107727]: 2026-01-22 09:36:45.562356461 +0000 UTC m=+0.414188296 container died 37a54d72bfd00fd278cbb63b5aa1bff926c4fc95d0c02fffff0bd7bcbe04b6d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 04:36:45 np0005591760 systemd[1]: var-lib-containers-storage-overlay-997eab5b8172ba3f5693cb6f38b5fb2948651c5a5ee6830369d877e673bc5c36-merged.mount: Deactivated successfully.
Jan 22 04:36:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:45 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:45 np0005591760 podman[107727]: 2026-01-22 09:36:45.597705345 +0000 UTC m=+0.449537179 container remove 37a54d72bfd00fd278cbb63b5aa1bff926c4fc95d0c02fffff0bd7bcbe04b6d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_carver, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:36:45 np0005591760 systemd[1]: libpod-conmon-37a54d72bfd00fd278cbb63b5aa1bff926c4fc95d0c02fffff0bd7bcbe04b6d4.scope: Deactivated successfully.
Jan 22 04:36:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v32: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:36:45 np0005591760 python3.9[107931]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:36:46 np0005591760 podman[107993]: 2026-01-22 09:36:46.137059243 +0000 UTC m=+0.037752896 container create 26b32fddf4cd646a770707682a826a1a5062f18cda32a5329f00809e9331a1da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 04:36:46 np0005591760 systemd[1]: Started libpod-conmon-26b32fddf4cd646a770707682a826a1a5062f18cda32a5329f00809e9331a1da.scope.
Jan 22 04:36:46 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:46 np0005591760 podman[107993]: 2026-01-22 09:36:46.206025241 +0000 UTC m=+0.106718905 container init 26b32fddf4cd646a770707682a826a1a5062f18cda32a5329f00809e9331a1da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:36:46 np0005591760 podman[107993]: 2026-01-22 09:36:46.214126827 +0000 UTC m=+0.114820480 container start 26b32fddf4cd646a770707682a826a1a5062f18cda32a5329f00809e9331a1da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_ishizaka, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 04:36:46 np0005591760 podman[107993]: 2026-01-22 09:36:46.121841165 +0000 UTC m=+0.022534838 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:46 np0005591760 podman[107993]: 2026-01-22 09:36:46.217732652 +0000 UTC m=+0.118426306 container attach 26b32fddf4cd646a770707682a826a1a5062f18cda32a5329f00809e9331a1da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:36:46 np0005591760 gallant_ishizaka[108006]: 167 167
Jan 22 04:36:46 np0005591760 systemd[1]: libpod-26b32fddf4cd646a770707682a826a1a5062f18cda32a5329f00809e9331a1da.scope: Deactivated successfully.
Jan 22 04:36:46 np0005591760 podman[107993]: 2026-01-22 09:36:46.223168763 +0000 UTC m=+0.123862426 container died 26b32fddf4cd646a770707682a826a1a5062f18cda32a5329f00809e9331a1da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 04:36:46 np0005591760 systemd[1]: var-lib-containers-storage-overlay-8facad90f0acb72c41b91523389827deb9f6dc2fd740a80377c27963401a7d53-merged.mount: Deactivated successfully.
Jan 22 04:36:46 np0005591760 podman[107993]: 2026-01-22 09:36:46.248356502 +0000 UTC m=+0.149050145 container remove 26b32fddf4cd646a770707682a826a1a5062f18cda32a5329f00809e9331a1da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:46 np0005591760 systemd[1]: libpod-conmon-26b32fddf4cd646a770707682a826a1a5062f18cda32a5329f00809e9331a1da.scope: Deactivated successfully.
Jan 22 04:36:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:46 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4006780 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:46 np0005591760 podman[108052]: 2026-01-22 09:36:46.383452902 +0000 UTC m=+0.037186443 container create b438819be4ce6f3aa838b3c58059ec45d71ab73fddc11bf870a41e37f22eae2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:36:46 np0005591760 systemd[1]: Started libpod-conmon-b438819be4ce6f3aa838b3c58059ec45d71ab73fddc11bf870a41e37f22eae2c.scope.
Jan 22 04:36:46 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:36:46 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b8bd9b76b22e34955c5d12c08346592ae52a217abcf2df610fde4c79a6c12a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:46 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b8bd9b76b22e34955c5d12c08346592ae52a217abcf2df610fde4c79a6c12a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:46 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b8bd9b76b22e34955c5d12c08346592ae52a217abcf2df610fde4c79a6c12a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:46 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b8bd9b76b22e34955c5d12c08346592ae52a217abcf2df610fde4c79a6c12a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:36:46 np0005591760 podman[108052]: 2026-01-22 09:36:46.460942987 +0000 UTC m=+0.114676528 container init b438819be4ce6f3aa838b3c58059ec45d71ab73fddc11bf870a41e37f22eae2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pasteur, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 04:36:46 np0005591760 podman[108052]: 2026-01-22 09:36:46.368625465 +0000 UTC m=+0.022359026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:36:46 np0005591760 podman[108052]: 2026-01-22 09:36:46.46837777 +0000 UTC m=+0.122111311 container start b438819be4ce6f3aa838b3c58059ec45d71ab73fddc11bf870a41e37f22eae2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pasteur, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 22 04:36:46 np0005591760 podman[108052]: 2026-01-22 09:36:46.46989925 +0000 UTC m=+0.123632792 container attach b438819be4ce6f3aa838b3c58059ec45d71ab73fddc11bf870a41e37f22eae2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:36:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000017s ======
Jan 22 04:36:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:46.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 22 04:36:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:46 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf0008dc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:47 np0005591760 lvm[108268]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:36:47 np0005591760 lvm[108268]: VG ceph_vg0 finished
Jan 22 04:36:47 np0005591760 exciting_pasteur[108065]: {}
Jan 22 04:36:47 np0005591760 podman[108052]: 2026-01-22 09:36:47.123683708 +0000 UTC m=+0.777417250 container died b438819be4ce6f3aa838b3c58059ec45d71ab73fddc11bf870a41e37f22eae2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pasteur, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 04:36:47 np0005591760 systemd[1]: libpod-b438819be4ce6f3aa838b3c58059ec45d71ab73fddc11bf870a41e37f22eae2c.scope: Deactivated successfully.
Jan 22 04:36:47 np0005591760 systemd[1]: libpod-b438819be4ce6f3aa838b3c58059ec45d71ab73fddc11bf870a41e37f22eae2c.scope: Consumed 1.078s CPU time.
Jan 22 04:36:47 np0005591760 systemd[1]: var-lib-containers-storage-overlay-68b8bd9b76b22e34955c5d12c08346592ae52a217abcf2df610fde4c79a6c12a-merged.mount: Deactivated successfully.
Jan 22 04:36:47 np0005591760 python3.9[108236]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:36:47 np0005591760 podman[108052]: 2026-01-22 09:36:47.156642368 +0000 UTC m=+0.810375908 container remove b438819be4ce6f3aa838b3c58059ec45d71ab73fddc11bf870a41e37f22eae2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pasteur, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:36:47 np0005591760 systemd[1]: libpod-conmon-b438819be4ce6f3aa838b3c58059ec45d71ab73fddc11bf870a41e37f22eae2c.scope: Deactivated successfully.
Jan 22 04:36:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:36:47 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:36:47 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:47 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:47 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:36:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:47.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:47 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4006780 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:36:47] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 22 04:36:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:36:47] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 22 04:36:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v33: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:36:48 np0005591760 python3.9[108462]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:36:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:36:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:48 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:36:48.434Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003057201s
Jan 22 04:36:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:48.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:48 np0005591760 python3.9[108547]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:36:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:48 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4006780 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/093649 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:36:49
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.data', '.nfs', 'volumes', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.meta']
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Jan 22 04:36:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 22 04:36:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:36:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:36:49 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:49 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 22 04:36:49 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 373f537a-c4a0-401f-808c-f0845c8979b0 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 22 04:36:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 22 04:36:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:49.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:36:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:49 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v35: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Jan 22 04:36:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 22 04:36:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 22 04:36:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 22 04:36:50 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 22 04:36:50 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 74edf7d2-7779-4f8f-a3e1-dbf113d7140f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 22 04:36:50 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:50 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:50 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:50 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:50 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 22 04:36:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:50 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4006780 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:50.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:50 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:51.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 22 04:36:51 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 0fd23765-10fa-42a4-b8d8-e3d68722b90e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:51 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4007490 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v38: 43 pgs: 31 unknown, 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 22 04:36:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 22 04:36:52 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 47 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=47 pruub=12.572689056s) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active pruub 215.145645142s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 22 04:36:52 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 47 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=47 pruub=12.572689056s) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown pruub 215.145645142s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:52 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 2f32aebe-2fd0-4a61-acf7-6d7e2aa006f4 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 22 04:36:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 22 04:36:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:52 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:52.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:52 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4007490 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:36:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:53.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1d( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1e( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1f( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1c( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1b( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1a( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.19( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.8( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.7( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.6( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.4( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.5( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.3( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.f( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.d( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.c( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.2( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.9( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.b( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.a( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.e( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.10( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.11( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.13( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.15( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.12( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.14( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.16( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.17( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.18( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1f( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1e( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1a( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1d( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.7( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1c( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.8( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.19( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1b( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.5( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.f( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.3( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.4( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.c( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.6( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.d( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.0( empty local-lis/les=47/48 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.2( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.9( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.10( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.a( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.b( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.e( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.11( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.13( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.14( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.16( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.17( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.18( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.1( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.15( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 48 pg[4.12( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [0] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:53 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 2e67feb8-f576-4826-8a49-b5880a3e88ad (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:53 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4007490 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 22 04:36:53 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 22 04:36:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v41: 105 pgs: 93 unknown, 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 22 04:36:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:54 np0005591760 ceph-mgr[74522]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 22 04:36:54 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 98db6ba5-f82a-430d-a1c7-7d69e13decb6 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 22 04:36:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:54 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4007490 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000018s ======
Jan 22 04:36:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:54.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Jan 22 04:36:54 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 22 04:36:54 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 22 04:36:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:54 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:55.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 22 04:36:55 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev b7dffd06-9559-49d5-82f9-e3fce786ce07 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:55 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 22 04:36:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v44: 151 pgs: 1 peering, 46 unknown, 104 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 22 04:36:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 49 pg[6.0( v 43'42 (0'0,43'42] local-lis/les=17/18 n=22 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=49 pruub=11.109925270s) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 43'41 mlcod 43'41 active pruub 217.158569336s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.0( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=1 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=49 pruub=11.109925270s) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 43'41 mlcod 0'0 unknown pruub 217.158569336s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.2( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.3( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.4( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.5( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.7( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.1( v 43'42 (0'0,43'42] local-lis/les=17/18 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.9( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.8( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.a( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.c( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.d( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.e( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 50 pg[6.f( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 22 04:36:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:56 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 22 04:36:56 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 00ce77ea-ad3a-4584-a9f4-037f26466088 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 22 04:36:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[8.0( v 29'6 (0'0,29'6] local-lis/les=28/29 n=6 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=51 pruub=9.199487686s) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 29'5 mlcod 29'5 active pruub 215.821929932s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.c( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.9( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.2( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.0( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 43'41 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.f( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.3( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.6( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.1( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.b( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.7( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[8.0( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=51 pruub=9.199487686s) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 29'5 mlcod 0'0 unknown pruub 215.821929932s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.4( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.5( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.a( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 51 pg[6.d( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [0] r=0 lpr=49 pi=[17,49)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x5581dbf34900) operator()   moving buffer(0x5581dc920d48 space 0x5581dc806f80 0x0~1000 clean)
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x5581dbf34900) operator()   moving buffer(0x5581dc953ba8 space 0x5581dc80fbb0 0x0~1000 clean)
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x5581dbf34900) operator()   moving buffer(0x5581dc743c48 space 0x5581dc848b70 0x0~1000 clean)
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x5581dbf34900) operator()   moving buffer(0x5581dc953ce8 space 0x5581dc7945c0 0x0~1000 clean)
Jan 22 04:36:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:56.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Jan 22 04:36:56 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Jan 22 04:36:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:56 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:57 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:36:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:57.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 22 04:36:57 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 56fbeb0d-2027-4a62-8183-2204e9317944 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.14( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1b( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.19( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.18( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1f( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1e( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1d( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1a( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1c( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.2( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=1 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.7( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.6( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=1 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.5( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=1 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.c( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.e( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.3( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=1 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.d( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.f( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.9( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.a( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.b( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.8( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.4( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=1 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.15( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.17( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.10( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1( v 29'6 (0'0,29'6] local-lis/les=28/29 n=1 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.11( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.12( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.13( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.16( v 29'6 lc 0'0 (0'0,29'6] local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1b( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1f( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.18( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1d( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.14( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1c( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1e( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.2( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.7( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.19( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.6( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.c( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.5( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.0( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 29'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.3( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.e( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1a( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.d( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.f( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.9( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.b( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.a( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.4( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.15( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.10( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.8( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.11( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.13( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.17( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.16( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.1( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 52 pg[8.12( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [0] r=0 lpr=51 pi=[28,51)/1 crt=29'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:36:57] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Jan 22 04:36:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:36:57] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Jan 22 04:36:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:57 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 22 04:36:57 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 22 04:36:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v47: 213 pgs: 1 peering, 108 unknown, 104 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 22 04:36:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:36:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:58 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 22 04:36:58 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev 03fa21db-c3d6-459f-8ac7-cfb26b749dbf (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Jan 22 04:36:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:36:58.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:58 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 22 04:36:58 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 22 04:36:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:58 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 53 pg[9.0( v 43'1161 (0'0,43'1161] local-lis/les=30/31 n=178 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=53 pruub=8.314811707s) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 43'1160 mlcod 43'1160 active pruub 217.845428467s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 53 pg[9.0( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=53 pruub=8.314811707s) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 43'1160 mlcod 0'0 unknown pruub 217.845428467s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc905748 space 0x5581dc70ba10 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc91cb68 space 0x5581dc393530 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc91d608 space 0x5581dc393050 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc91cc08 space 0x5581dc6abc80 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc91cf28 space 0x5581dc8776d0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc934de8 space 0x5581dc9a1050 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc9059c8 space 0x5581dbe7be20 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc90d068 space 0x5581dc9176d0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc91ca28 space 0x5581dc85d940 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc91c168 space 0x5581dc73d6d0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc91cca8 space 0x5581dc6f09d0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc9040c8 space 0x5581dbe7ab70 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc93a8e8 space 0x5581dc876760 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc93a3e8 space 0x5581dc876eb0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc904e88 space 0x5581dbe7bd50 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc91d108 space 0x5581dc6aade0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc934fc8 space 0x5581dc9a20e0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc93afc8 space 0x5581dbe892c0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc93af28 space 0x5581dc6ac5c0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc91c0c8 space 0x5581dc6ac420 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc91dd88 space 0x5581dc7309d0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc70f568 space 0x5581dc8fc420 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc91d568 space 0x5581dc731a10 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc934208 space 0x5581dc99c690 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc735a68 space 0x5581dc783050 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc93b9c8 space 0x5581dc6f0d10 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc93be28 space 0x5581dc9165c0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc91c5c8 space 0x5581dc6aa0e0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc93aac8 space 0x5581dc917d50 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc905248 space 0x5581dc7311f0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5581dbbf5680) operator()   moving buffer(0x5581dc93b428 space 0x5581dc9172c0 0x0~1000 clean)
Jan 22 04:36:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:36:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:36:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:36:59.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.15( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1b( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.18( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.19( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1e( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1c( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1f( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1d( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1a( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.3( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.6( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.7( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.4( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.d( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.f( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.2( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.c( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.e( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.9( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.b( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.8( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.a( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.5( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.17( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.16( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.11( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.10( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.14( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.13( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.12( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=30/31 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] update: starting ev e1b435fd-e058-4db5-b28d-69a40001a49b (PG autoscaler increasing pool 12 PGs from 1 to 32)
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 373f537a-c4a0-401f-808c-f0845c8979b0 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 373f537a-c4a0-401f-808c-f0845c8979b0 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 10 seconds
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 74edf7d2-7779-4f8f-a3e1-dbf113d7140f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 74edf7d2-7779-4f8f-a3e1-dbf113d7140f (PG autoscaler increasing pool 3 PGs from 1 to 32) in 9 seconds
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 0fd23765-10fa-42a4-b8d8-e3d68722b90e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 0fd23765-10fa-42a4-b8d8-e3d68722b90e (PG autoscaler increasing pool 4 PGs from 1 to 32) in 8 seconds
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 2f32aebe-2fd0-4a61-acf7-6d7e2aa006f4 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 2f32aebe-2fd0-4a61-acf7-6d7e2aa006f4 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 7 seconds
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 2e67feb8-f576-4826-8a49-b5880a3e88ad (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 2e67feb8-f576-4826-8a49-b5880a3e88ad (PG autoscaler increasing pool 6 PGs from 1 to 16) in 6 seconds
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 98db6ba5-f82a-430d-a1c7-7d69e13decb6 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 98db6ba5-f82a-430d-a1c7-7d69e13decb6 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev b7dffd06-9559-49d5-82f9-e3fce786ce07 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event b7dffd06-9559-49d5-82f9-e3fce786ce07 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 00ce77ea-ad3a-4584-a9f4-037f26466088 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 00ce77ea-ad3a-4584-a9f4-037f26466088 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 56fbeb0d-2027-4a62-8183-2204e9317944 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 56fbeb0d-2027-4a62-8183-2204e9317944 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1c( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.3( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.4( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.7( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.f( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.2( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.c( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.1( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.0( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 43'1160 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.b( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.5( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.17( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.14( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.13( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 54 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=30/30 les/c/f=31/31/0 sis=53) [0] r=0 lpr=53 pi=[30,53)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev 03fa21db-c3d6-459f-8ac7-cfb26b749dbf (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event 03fa21db-c3d6-459f-8ac7-cfb26b749dbf (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] complete: finished ev e1b435fd-e058-4db5-b28d-69a40001a49b (PG autoscaler increasing pool 12 PGs from 1 to 32)
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event e1b435fd-e058-4db5-b28d-69a40001a49b (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Jan 22 04:36:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:36:59 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 22 04:36:59 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 22 04:36:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v50: 275 pgs: 1 peering, 170 unknown, 104 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 22 04:36:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:00 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:37:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:00 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:37:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:00 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 22 04:37:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:37:00 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:37:00 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 22 04:37:00 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 55 pg[11.0( v 43'2 (0'0,43'2] local-lis/les=34/35 n=2 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=55 pruub=10.504526138s) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 43'1 mlcod 43'1 active pruub 221.163436890s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:00 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 22 04:37:00 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 55 pg[11.0( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=55 pruub=10.504526138s) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 43'1 mlcod 0'0 unknown pruub 221.163436890s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Jan 22 04:37:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:00 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:00 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:00.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:00 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 22 04:37:00 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 22 04:37:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:00 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000017s ======
Jan 22 04:37:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:01.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 22 04:37:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 22 04:37:01 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:37:01 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 04:37:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 22 04:37:01 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.10( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.13( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.14( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.16( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.7( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.6( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.5( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.4( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=34/35 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.19( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.17( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.13( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.16( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.7( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.c( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.a( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.0( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 43'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=55/56 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.5( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=55/56 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1d( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 56 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 22 04:37:01 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 22 04:37:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:01 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v53: 337 pgs: 62 unknown, 275 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 2.5 KiB/s wr, 7 op/s
Jan 22 04:37:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:02 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:02 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 22 04:37:02 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 22 04:37:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:02.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:02 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde8005c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:37:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000017s ======
Jan 22 04:37:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:03.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 22 04:37:03 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 22 04:37:03 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 22 04:37:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:03 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v54: 337 pgs: 62 unknown, 275 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 1.9 KiB/s wr, 5 op/s
Jan 22 04:37:04 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 26 completed events
Jan 22 04:37:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:37:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:04 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:04 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:37:04 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:04 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 22 04:37:04 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 22 04:37:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.003000053s ======
Jan 22 04:37:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:04.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000053s
Jan 22 04:37:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:04 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000018s ======
Jan 22 04:37:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:05.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Jan 22 04:37:05 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 22 04:37:05 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 22 04:37:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:05 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80029b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v55: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 22 04:37:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80029b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.1f( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.873616219s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.586868286s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.1f( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.873581886s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.586868286s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.12( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.923171043s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.636749268s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.12( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.923132896s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.636749268s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.966711044s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.680343628s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.966695786s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.680343628s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.1d( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.874024391s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.587860107s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.1d( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.874008179s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.587860107s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.11( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.922385216s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.636383057s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.13( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.966341972s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.680358887s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.11( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.922369957s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.636383057s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.1c( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.873723030s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.587936401s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.1c( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.873709679s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.587936401s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.13( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.966328621s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.680358887s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.10( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.921586037s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.636337280s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.10( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.921571732s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.636337280s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965566635s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.680389404s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965549469s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.680389404s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.1b( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.873097420s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.587966919s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.1b( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.873085976s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.587966919s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.17( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.921441078s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.636398315s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.17( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.921429634s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.636398315s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.1a( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.872875214s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.587890625s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.1a( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.872859955s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.587890625s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.16( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.921339989s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.636398315s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.16( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.921329498s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.636398315s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.16( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965231895s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.680374146s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.19( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.872793198s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.587951660s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.16( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965219498s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.680374146s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.19( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.872780800s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.587951660s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.15( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.921073914s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.636337280s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.15( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.921064377s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.636337280s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.4( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.921021461s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.636322021s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.4( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.921009064s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.636322021s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.8( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.872571945s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.587951660s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.7( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965292931s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.680770874s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.7( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965280533s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.680770874s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.5( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.908518791s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 230.624603271s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.b( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.919737816s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635848999s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.5( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.908504486s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.624603271s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.b( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.919724464s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635848999s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964585304s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.680831909s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964574814s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.680831909s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.6( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.871755600s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588119507s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.6( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.871747017s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588119507s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.a( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.919856071s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.636322021s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.a( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.919845581s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.636322021s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.7( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.908028603s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 230.624603271s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.7( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.908018112s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.624603271s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.5( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.871290207s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.587966919s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.5( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.871279716s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.587966919s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.a( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963829041s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.680877686s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.a( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963816643s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.680877686s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.8( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.919134140s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.636337280s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.8( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.919122696s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.636337280s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.9( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.919074059s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635833740s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.9( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.918575287s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635833740s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.1( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.907236099s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 230.624588013s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.1( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.907224655s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.624588013s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.3( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.870368958s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588012695s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.3( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.870355606s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588012695s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.f( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.918004036s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635833740s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.f( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.917993546s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635833740s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.3( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.906567574s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 230.624496460s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.3( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.906558037s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.624496460s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.1( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.870364189s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588378906s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.1( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.870354652s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588378906s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.d( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.917759895s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635848999s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962721825s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.680908203s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.3( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.917369843s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635726929s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.3( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.917358398s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635726929s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962707520s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.680908203s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.f( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.905665398s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 230.624496460s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.f( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.905652046s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.624496460s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.d( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.905559540s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 230.624496460s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.d( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.905544281s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.624496460s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.c( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.868930817s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588134766s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.c( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.868918419s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588134766s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961853027s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.681228638s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961841583s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.681228638s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.2( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.868680954s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588165283s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.2( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.868671417s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588165283s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.d( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.868521690s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588088989s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961630821s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.681243896s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.d( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.868505478s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588088989s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.8( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.872559547s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.587951660s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.c( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.915910721s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635681152s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.c( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.915901184s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635681152s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.5( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.915836334s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635696411s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.5( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.915826797s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635696411s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.b( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.904673576s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 230.624603271s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.b( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.904663086s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.624603271s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.9( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.868145943s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588165283s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.9( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.868137360s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588165283s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.6( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.915616989s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635726929s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.6( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.915607452s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635726929s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.a( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.868006706s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588226318s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.a( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.867998123s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588226318s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.d( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.917752266s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635848999s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.5( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.960654259s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.681289673s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.5( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.960643768s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.681289673s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.9( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.903281212s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 230.624359131s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[6.9( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=13.903265953s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.624359131s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.960075378s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.681304932s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.960064888s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.681304932s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.e( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.866779327s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588256836s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.e( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.866765976s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588256836s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.2( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.914070129s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635589600s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=55/56 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.959734917s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.681320190s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.2( v 29'6 (0'0,29'6] local-lis/les=51/52 n=1 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.914053917s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635589600s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=55/56 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.959722519s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.681320190s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.1c( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.913865089s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635559082s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.1c( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.913854599s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635559082s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.1d( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.959527016s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.681350708s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.1d( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.959516525s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.681350708s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.959429741s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.681320190s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.959412575s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.681320190s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.959411621s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.681350708s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.959401131s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.681350708s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.13( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.866272926s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588287354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.13( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.866265297s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588287354s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.959251404s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.681365967s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.959242821s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.681365967s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.14( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.866106033s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588302612s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.14( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.866097450s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588302612s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.18( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.913121223s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635406494s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.18( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.913111687s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635406494s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.1f( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.913120270s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635482788s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.958283424s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.681396484s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.1f( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.912501335s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635482788s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961622238s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.681243896s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.15( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.865086555s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588394165s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.15( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.865075111s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588394165s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.958270073s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.681396484s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.19( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.912178993s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635665894s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.19( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.912167549s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635665894s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.957406044s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.681381226s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.957392693s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.681381226s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.957260132s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 active pruub 227.681381226s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.957249641s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.681381226s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.18( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.864047050s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 227.588363647s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[4.18( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.864035606s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.588363647s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.1b( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.910999298s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635360718s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.14( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.911039352s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 active pruub 231.635543823s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.14( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.911027908s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635543823s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[8.1b( v 29'6 (0'0,29'6] local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=14.910982132s) [1] r=-1 lpr=57 pi=[51,57)/1 crt=29'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.635360718s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[5.1d( empty local-lis/les=0/0 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[3.19( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[5.1e( empty local-lis/les=0/0 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[3.18( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[3.17( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[3.12( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[5.14( empty local-lis/les=0/0 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[5.a( empty local-lis/les=0/0 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[5.6( empty local-lis/les=0/0 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[3.6( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[3.1( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[3.2( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[5.5( empty local-lis/les=0/0 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[3.4( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[5.17( empty local-lis/les=0/0 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[5.3( empty local-lis/les=0/0 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[3.7( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[5.c( empty local-lis/les=0/0 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[3.b( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[3.1e( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[3.1f( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[5.19( empty local-lis/les=0/0 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[2.1f( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[2.1e( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.1b( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[12.10( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.18( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[12.12( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.1e( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[2.19( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.2( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.3( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[2.6( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[12.8( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[2.4( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[12.a( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[12.c( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.4( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.b( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[2.e( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[12.6( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[12.b( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[12.e( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[2.9( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[2.1( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.f( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[10.2( empty local-lis/les=0/0 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.e( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.9( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.8( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[10.5( empty local-lis/les=0/0 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[12.1c( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.10( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[7.13( empty local-lis/les=0/0 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 57 pg[12.19( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 22 04:37:06 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 22 04:37:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:06.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:06 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf000a7e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:07.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[2.19( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[3.18( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[5.1e( empty local-lis/les=57/58 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[3.19( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[10.13( v 43'96 (0'0,43'96] local-lis/les=57/58 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.1e( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.18( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[12.12( v 54'58 (0'0,54'58] local-lis/les=57/58 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=54'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[10.14( v 54'99 lc 43'86 (0'0,54'99] local-lis/les=57/58 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=54'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[2.1f( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[3.1e( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.b( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[2.e( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[2.1( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[5.6( empty local-lis/les=57/58 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.4( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[12.e( v 54'58 (0'0,54'58] local-lis/les=57/58 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=54'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[3.1( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.6( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[3.2( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[5.5( empty local-lis/les=57/58 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[12.c( v 54'58 (0'0,54'58] local-lis/les=57/58 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=54'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[12.b( v 54'58 (0'0,54'58] local-lis/les=57/58 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=54'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[10.8( v 43'96 (0'0,43'96] local-lis/les=57/58 n=1 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[3.4( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[3.6( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.2( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[2.9( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.e( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[5.c( empty local-lis/les=57/58 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[5.1d( empty local-lis/les=57/58 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[10.2( v 43'96 (0'0,43'96] local-lis/les=57/58 n=1 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.f( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[3.b( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[12.a( v 54'58 lc 0'0 (0'0,54'58] local-lis/les=57/58 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=54'58 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[10.15( v 54'99 lc 43'78 (0'0,54'99] local-lis/les=57/58 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=54'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[2.4( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[5.3( empty local-lis/les=57/58 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[12.8( v 54'58 (0'0,54'58] local-lis/les=57/58 n=1 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=54'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.3( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[2.6( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[3.7( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.9( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[5.a( empty local-lis/les=57/58 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[10.5( v 43'96 (0'0,43'96] local-lis/les=57/58 n=1 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[12.6( v 54'58 lc 43'41 (0'0,54'58] local-lis/les=57/58 n=1 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=54'58 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.8( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.13( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[3.17( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[12.19( v 54'58 (0'0,54'58] local-lis/les=57/58 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=54'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[12.1c( v 54'58 (0'0,54'58] local-lis/les=57/58 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=54'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[10.1b( v 43'96 (0'0,43'96] local-lis/les=57/58 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[3.12( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[5.14( empty local-lis/les=57/58 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[10.18( v 43'96 (0'0,43'96] local-lis/les=57/58 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[5.17( empty local-lis/les=57/58 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[10.19( v 43'96 (0'0,43'96] local-lis/les=57/58 n=0 ec=53/32 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.1b( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[2.1e( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[7.10( empty local-lis/les=57/58 n=0 ec=51/18 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[12.10( v 56'61 lc 43'27 (0'0,56'61] local-lis/les=57/58 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=56'61 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[5.19( empty local-lis/les=57/58 n=0 ec=49/16 lis/c=49/49 les/c/f=50/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 58 pg[3.1f( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:37:07] "GET /metrics HTTP/1.1" 200 48354 "" "Prometheus/2.51.0"
Jan 22 04:37:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:37:07] "GET /metrics HTTP/1.1" 200 48354 "" "Prometheus/2.51.0"
Jan 22 04:37:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:07 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v58: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 22 04:37:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 22 04:37:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:37:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:08 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efe000039c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 22 04:37:08 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 22 04:37:08 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 22 04:37:08 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 22 04:37:08 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 22 04:37:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 22 04:37:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 22 04:37:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 22 04:37:08 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 22 04:37:08 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 59 pg[6.a( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=11.885145187s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 230.624801636s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:08 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 59 pg[6.a( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=11.885115623s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.624801636s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:08 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 59 pg[6.6( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=11.884215355s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 230.624588013s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:08 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 59 pg[6.6( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=11.884177208s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.624588013s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:08 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 59 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=11.883696556s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 230.624435425s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:08 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 59 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=11.883676529s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.624435425s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:08 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 59 pg[6.2( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=11.883333206s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 230.624374390s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:08 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 59 pg[6.2( v 43'42 (0'0,43'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=11.883319855s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.624374390s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000017s ======
Jan 22 04:37:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:08.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 22 04:37:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:08 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80029b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:08 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 22 04:37:09 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 22 04:37:09 np0005591760 ceph-mgr[74522]: [progress INFO root] Completed event b8ea45d4-41fb-4ebf-b7f6-fe04acf09dc8 (Global Recovery Event) in 15 seconds
Jan 22 04:37:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000018s ======
Jan 22 04:37:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:09.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Jan 22 04:37:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 22 04:37:09 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.a scrub starts
Jan 22 04:37:09 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 22 04:37:09 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 22 04:37:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 22 04:37:09 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 22 04:37:09 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.a scrub ok
Jan 22 04:37:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:09 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80029b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v61: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:37:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 22 04:37:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 04:37:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 22 04:37:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 04:37:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:10 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 22 04:37:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 22 04:37:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 04:37:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 04:37:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 22 04:37:10 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.17( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.897531509s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 233.658950806s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.17( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.897507668s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.658950806s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.7( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.895302773s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 233.657226562s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.7( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.895288467s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.657226562s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.3( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.895048141s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 233.657135010s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.3( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.895037651s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.657135010s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.894749641s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 233.657073975s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.894739151s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.657073975s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.b( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.896514893s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 233.658935547s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.b( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.896504402s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.658935547s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:10 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 04:37:10 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 04:37:10 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 04:37:10 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.13( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.896634102s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 233.659133911s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.13( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.896626472s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.659133911s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.f( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.892972946s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 233.657241821s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.f( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.892885208s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.657241821s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.890979767s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 233.656005859s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:10 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 61 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=12.890889168s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.656005859s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:10.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:10 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000017s ======
Jan 22 04:37:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:11.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Jan 22 04:37:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 22 04:37:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.17( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.17( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.13( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.13( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.b( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.b( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.7( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.7( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.3( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.3( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.f( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:11 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 62 pg[9.f( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:11 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 22 04:37:11 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 04:37:11 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 04:37:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:11 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efdf000a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v64: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 600 B/s, 3 keys/s, 6 objects/s recovering
Jan 22 04:37:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 22 04:37:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 22 04:37:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 22 04:37:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 22 04:37:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:12 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:12 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Jan 22 04:37:12 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Jan 22 04:37:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 22 04:37:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 22 04:37:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 22 04:37:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 22 04:37:12 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 22 04:37:12 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 63 pg[9.3( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:12 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 63 pg[9.f( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:12 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 63 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:12 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 63 pg[9.7( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:12 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 63 pg[9.17( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:12 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 63 pg[9.13( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:12 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 63 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:12 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 63 pg[9.b( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[53,62)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 22 04:37:12 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 22 04:37:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 22 04:37:12 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 22 04:37:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 22 04:37:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 22 04:37:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000017s ======
Jan 22 04:37:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:12.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 22 04:37:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:12 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:37:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 22 04:37:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=5 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.391211510s) [2] async=[2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 238.776870728s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:13 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=5 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.390359879s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.776870728s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.b( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=6 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.391056061s) [2] async=[2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 238.777801514s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.b( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=6 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.391015053s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.777801514s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.17( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=5 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.388962746s) [2] async=[2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 238.775848389s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.17( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=5 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.388939857s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.775848389s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.13( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=5 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.388884544s) [2] async=[2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 238.775894165s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.13( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=5 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.388855934s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.775894165s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.7( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=6 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.388629913s) [2] async=[2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 238.775802612s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.7( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=6 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.388591766s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.775802612s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.3( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=6 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.386621475s) [2] async=[2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 238.774429321s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=5 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.387823105s) [2] async=[2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 238.775726318s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=5 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.387806892s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.775726318s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.f( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=6 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.384102821s) [2] async=[2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 238.774734497s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.3( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=6 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.383934021s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.774429321s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 64 pg[9.f( v 43'1161 (0'0,43'1161] local-lis/les=62/63 n=6 ec=53/30 lis/c=62/53 les/c/f=63/54/0 sis=64 pruub=15.383891106s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.774734497s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:13.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 22 04:37:13 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 22 04:37:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:13 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efe000044e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v67: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 600 B/s, 3 keys/s, 6 objects/s recovering
Jan 22 04:37:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 22 04:37:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 22 04:37:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 22 04:37:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 22 04:37:14 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 65 pg[9.5( v 54'1164 (0'0,54'1164] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=9.270456314s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=54'1162 lcod 54'1163 mlcod 54'1163 active pruub 233.658950806s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:14 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 65 pg[9.5( v 54'1164 (0'0,54'1164] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=9.270410538s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=54'1162 lcod 54'1163 mlcod 0'0 unknown NOTIFY pruub 233.658950806s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:14 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 65 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=9.268383980s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 233.657058716s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:14 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 65 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=9.268371582s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.657058716s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:14 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 65 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=9.266169548s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 233.655960083s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:14 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 65 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=9.266074181s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.655960083s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:14 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 65 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=9.266985893s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 233.657241821s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:14 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 65 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=65 pruub=9.266892433s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.657241821s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 22 04:37:14 np0005591760 ceph-mgr[74522]: [progress INFO root] Writing back 27 completed events
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:14 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:14 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 22 04:37:14 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 22 04:37:14 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:14.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:14 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 22 04:37:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 22 04:37:15 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 22 04:37:15 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 66 pg[9.5( v 54'1164 (0'0,54'1164] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=54'1162 lcod 54'1163 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:15 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 66 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:15 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 66 pg[9.5( v 54'1164 (0'0,54'1164] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=54'1162 lcod 54'1163 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:15 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 66 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:15 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 66 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:15 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 66 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:15 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 66 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:15 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 66 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:15.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:15 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 22 04:37:15 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 22 04:37:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:15 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v70: 337 pgs: 4 remapped+peering, 1 active+recovery_wait+degraded, 1 active+recovering, 331 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2/230 objects degraded (0.870%); 1/230 objects misplaced (0.435%); 200 B/s, 1 keys/s, 10 objects/s recovering
Jan 22 04:37:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 22 04:37:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 22 04:37:16 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 22 04:37:16 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 67 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:16 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 67 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:16 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 67 pg[9.5( v 54'1164 (0'0,54'1164] local-lis/les=66/67 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=54'1164 lcod 54'1163 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:16 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 67 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:16 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efe00004e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:16 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 22 04:37:16 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 22 04:37:16 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 2/230 objects degraded (0.870%), 1 pg degraded (PG_DEGRADED)
Jan 22 04:37:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000017s ======
Jan 22 04:37:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:16.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 22 04:37:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:16 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/093717 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:37:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 22 04:37:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 22 04:37:17 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 22 04:37:17 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996648788s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 242.400588989s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:17 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996540070s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.400588989s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:17 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.997281075s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 54'1164 active pruub 242.401596069s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:17 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.997172356s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 0'0 unknown NOTIFY pruub 242.401596069s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:17 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996450424s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 242.401687622s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:17 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996104240s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 242.401489258s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:17 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996085167s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401687622s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:17 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.995922089s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401489258s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:17.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:17 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.9 deep-scrub starts
Jan 22 04:37:17 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.9 deep-scrub ok
Jan 22 04:37:17 np0005591760 ceph-mon[74254]: Health check failed: Degraded data redundancy: 2/230 objects degraded (0.870%), 1 pg degraded (PG_DEGRADED)
Jan 22 04:37:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:37:17] "GET /metrics HTTP/1.1" 200 48354 "" "Prometheus/2.51.0"
Jan 22 04:37:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:37:17] "GET /metrics HTTP/1.1" 200 48354 "" "Prometheus/2.51.0"
Jan 22 04:37:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:17 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v73: 337 pgs: 4 remapped+peering, 1 active+recovery_wait+degraded, 1 active+recovering, 331 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2/230 objects degraded (0.870%); 1/230 objects misplaced (0.435%); 200 B/s, 1 keys/s, 10 objects/s recovering
Jan 22 04:37:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:37:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 22 04:37:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 22 04:37:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 22 04:37:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:18 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:18 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 22 04:37:18 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 22 04:37:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:18.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:18 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efe00004e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 22 04:37:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:37:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:37:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:37:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7ff3678a0d00>)]
Jan 22 04:37:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 22 04:37:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:19.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:37:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7ff3678a0cd0>)]
Jan 22 04:37:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 22 04:37:19 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 22 04:37:19 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 22 04:37:19 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:19 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v75: 337 pgs: 4 remapped+peering, 1 active+recovery_wait+degraded, 1 active+recovering, 331 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2/230 objects degraded (0.870%); 1/230 objects misplaced (0.435%); 172 B/s, 1 keys/s, 8 objects/s recovering
Jan 22 04:37:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:20 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:20 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 22 04:37:20 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 22 04:37:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:20.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:20 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde80029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000017s ======
Jan 22 04:37:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:21.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 22 04:37:21 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 22 04:37:21 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 22 04:37:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:21 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efe00004e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:21 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.rfmoog(active, since 92s), standbys: compute-2.bisona, compute-1.upcmhd
Jan 22 04:37:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v76: 337 pgs: 1 active+clean+scrubbing, 336 active+clean; 459 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 70 B/s, 3 objects/s recovering
Jan 22 04:37:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 22 04:37:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 22 04:37:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 22 04:37:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 22 04:37:22 np0005591760 kernel: ganesha.nfsd[101102]: segfault at 50 ip 00007efe7301632e sp 00007efdfa7fb210 error 4 in libntirpc.so.5.8[7efe72ffb000+2c000] likely on CPU 3 (core 0, socket 3)
Jan 22 04:37:22 np0005591760 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 22 04:37:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[100849]: 22/01/2026 09:37:22 : epoch 6971ef85 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efde4008590 fd 48 proxy ignored for local
Jan 22 04:37:22 np0005591760 systemd[1]: Created slice Slice /system/systemd-coredump.
Jan 22 04:37:22 np0005591760 systemd[1]: Started Process Core Dump (PID 108785/UID 0).
Jan 22 04:37:22 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 22 04:37:22 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 22 04:37:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 22 04:37:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000017s ======
Jan 22 04:37:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:22.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 22 04:37:22 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/230 objects degraded (0.870%), 1 pg degraded)
Jan 22 04:37:22 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 22 04:37:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 22 04:37:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 22 04:37:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 22 04:37:22 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 22 04:37:22 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.799471855s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 241.659103394s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:22 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.799409866s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.659103394s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:22 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.797591209s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 241.657760620s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:22 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.797536850s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657760620s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:22 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.796589851s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 241.657379150s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:22 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.796569824s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657379150s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:22 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.795056343s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 241.656280518s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:22 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.794763565s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.656280518s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:22 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:22 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:22 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 22 04:37:22 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 22 04:37:22 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 22 04:37:22 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 22 04:37:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:37:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 22 04:37:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 22 04:37:23 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 22 04:37:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 lc 42'19 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:23.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:23 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 22 04:37:23 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 22 04:37:23 np0005591760 ceph-mon[74254]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/230 objects degraded (0.870%), 1 pg degraded)
Jan 22 04:37:23 np0005591760 ceph-mon[74254]: Cluster is now healthy
Jan 22 04:37:23 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 22 04:37:23 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 22 04:37:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v79: 337 pgs: 1 active+clean+scrubbing, 336 active+clean; 459 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 70 B/s, 3 objects/s recovering
Jan 22 04:37:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 22 04:37:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 22 04:37:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 22 04:37:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 22 04:37:24 np0005591760 systemd-coredump[108786]: Process 100853 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 42:#012#0  0x00007efe7301632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012#1  0x0000000000000000 n/a (n/a + 0x0)#012#2  0x00007efe73020900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)#012ELF object binary architecture: AMD x86-64
Jan 22 04:37:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 22 04:37:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 22 04:37:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 22 04:37:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 22 04:37:24 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 22 04:37:24 np0005591760 systemd[1]: systemd-coredump@0-108785-0.service: Deactivated successfully.
Jan 22 04:37:24 np0005591760 systemd[1]: systemd-coredump@0-108785-0.service: Consumed 1.661s CPU time.
Jan 22 04:37:24 np0005591760 podman[108793]: 2026-01-22 09:37:24.257504304 +0000 UTC m=+0.026659346 container died 031496c0ee52749d6ca2497660dbe05b1127b23323048c8cd033c2e060138ea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:37:24 np0005591760 systemd[1]: var-lib-containers-storage-overlay-f7a3dfd55375c26d678c15bed6e34600103eea7ae9c9253e5dc4640215c40f62-merged.mount: Deactivated successfully.
Jan 22 04:37:24 np0005591760 podman[108793]: 2026-01-22 09:37:24.292034728 +0000 UTC m=+0.061189771 container remove 031496c0ee52749d6ca2497660dbe05b1127b23323048c8cd033c2e060138ea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:37:24 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Main process exited, code=exited, status=139/n/a
Jan 22 04:37:24 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 22 04:37:24 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 22 04:37:24 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Failed with result 'exit-code'.
Jan 22 04:37:24 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Consumed 1.562s CPU time.
Jan 22 04:37:24 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:24 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:24 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:24 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:24.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:24 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 22 04:37:24 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 22 04:37:24 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 22 04:37:24 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 22 04:37:24 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 22 04:37:24 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 22 04:37:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:25.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:25 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 22 04:37:25 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 22 04:37:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 22 04:37:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 22 04:37:25 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 22 04:37:25 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947658539s) [1] async=[1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 250.834869385s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:25 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947585106s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.834869385s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:25 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948608398s) [1] async=[1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 250.836120605s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:25 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948558807s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836120605s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:25 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948017120s) [1] async=[1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 250.835922241s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:25 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948321342s) [1] async=[1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 250.836303711s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:25 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947887421s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.835922241s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:25 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948175430s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836303711s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 4 active+remapped, 333 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Jan 22 04:37:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 22 04:37:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 04:37:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 22 04:37:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 04:37:26 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 22 04:37:26 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 22 04:37:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000018s ======
Jan 22 04:37:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:26.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Jan 22 04:37:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 22 04:37:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 04:37:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 04:37:26 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 04:37:26 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 04:37:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 04:37:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 04:37:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 22 04:37:26 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 22 04:37:26 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.759959221s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 249.657836914s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:26 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.759923935s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:26 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.757369995s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 249.656463623s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:26 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.757349014s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656463623s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:26 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74 pruub=9.729435921s) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 246.628738403s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:26 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74 pruub=9.729285240s) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.628738403s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:27 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 22 04:37:27 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 22 04:37:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:27.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:37:27] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Jan 22 04:37:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:37:27] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Jan 22 04:37:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 22 04:37:27 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 04:37:27 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 04:37:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 22 04:37:27 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 22 04:37:27 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:27 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:27 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:27 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v85: 337 pgs: 4 active+remapped, 333 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Jan 22 04:37:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 22 04:37:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 22 04:37:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 22 04:37:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 22 04:37:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:37:28 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Jan 22 04:37:28 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Jan 22 04:37:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/093728 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:37:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000018s ======
Jan 22 04:37:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:28.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Jan 22 04:37:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 22 04:37:28 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 22 04:37:28 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 22 04:37:28 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 22 04:37:28 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 22 04:37:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 22 04:37:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 22 04:37:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 22 04:37:28 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:28 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 22 04:37:28 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.726813316s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 249.657867432s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:28 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.726340294s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657867432s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:28 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.724657059s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 249.656433105s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:28 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.724640846s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656433105s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:28 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:28 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:29 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 22 04:37:29 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 22 04:37:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:29.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 22 04:37:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 22 04:37:29 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 22 04:37:29 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:29 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 22 04:37:29 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 22 04:37:29 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:29 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:29 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.000308990s) [2] async=[2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 254.934616089s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:29 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:29 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.001207352s) [2] async=[2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 254.935806274s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:29 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.001164436s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.935806274s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:29 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[6.9( v 43'42 (0'0,43'42] local-lis/les=76/77 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:29 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.000247955s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.934616089s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 4 active+remapped, 333 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:37:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 22 04:37:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 22 04:37:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 22 04:37:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 22 04:37:30 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 22 04:37:30 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 22 04:37:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000017s ======
Jan 22 04:37:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:30.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 22 04:37:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 22 04:37:30 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 22 04:37:30 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 22 04:37:30 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 22 04:37:30 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 22 04:37:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 22 04:37:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 22 04:37:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 22 04:37:30 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 22 04:37:30 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.708597183s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 249.657836914s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:30 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.708545685s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:30 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.706829071s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 249.657592773s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:30 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.706775665s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657592773s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:30 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:30 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:31 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 22 04:37:31 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 22 04:37:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000018s ======
Jan 22 04:37:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:31.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Jan 22 04:37:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 22 04:37:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 22 04:37:31 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:31 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:31 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:31 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:31 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998394966s) [2] async=[2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 256.954620361s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:31 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998344421s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.954620361s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:31 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998961449s) [2] async=[2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 256.955718994s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:31 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998907089s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.955718994s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:31 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 22 04:37:31 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 22 04:37:31 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 22 04:37:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v91: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 458 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 04:37:32 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 22 04:37:32 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 22 04:37:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:37:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:32.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:37:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 22 04:37:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 22 04:37:32 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 22 04:37:32 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:32 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:37:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 22 04:37:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 22 04:37:33 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 22 04:37:33 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.757055283s) [1] async=[1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 259.152893066s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:33 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756841660s) [1] async=[1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 259.152709961s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:33 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756978989s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152893066s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:33 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756772041s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152709961s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:33 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 22 04:37:33 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 22 04:37:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:33.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v94: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 458 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 04:37:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 22 04:37:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 22 04:37:34 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 22 04:37:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 22 04:37:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:34 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 22 04:37:34 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 22 04:37:34 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Scheduled restart job, restart counter is at 1.
Jan 22 04:37:34 np0005591760 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:37:34 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Consumed 1.562s CPU time.
Jan 22 04:37:34 np0005591760 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:37:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:34.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:34 np0005591760 podman[108924]: 2026-01-22 09:37:34.67995761 +0000 UTC m=+0.033728629 container create d791c845824805d9759eea399ab1b77ce3a5ac18664d4cdeedcfbb4e670a4815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Jan 22 04:37:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac94927b5ced4e39e638c291cc4aeb409319ffe20f106322321371fbd76b0b52/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac94927b5ced4e39e638c291cc4aeb409319ffe20f106322321371fbd76b0b52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac94927b5ced4e39e638c291cc4aeb409319ffe20f106322321371fbd76b0b52/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac94927b5ced4e39e638c291cc4aeb409319ffe20f106322321371fbd76b0b52/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ylzmiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:34 np0005591760 podman[108924]: 2026-01-22 09:37:34.732329585 +0000 UTC m=+0.086100604 container init d791c845824805d9759eea399ab1b77ce3a5ac18664d4cdeedcfbb4e670a4815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 04:37:34 np0005591760 podman[108924]: 2026-01-22 09:37:34.738121183 +0000 UTC m=+0.091892192 container start d791c845824805d9759eea399ab1b77ce3a5ac18664d4cdeedcfbb4e670a4815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:37:34 np0005591760 bash[108924]: d791c845824805d9759eea399ab1b77ce3a5ac18664d4cdeedcfbb4e670a4815
Jan 22 04:37:34 np0005591760 podman[108924]: 2026-01-22 09:37:34.667025037 +0000 UTC m=+0.020796066 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:37:34 np0005591760 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:37:34 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 22 04:37:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 22 04:37:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 22 04:37:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 22 04:37:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 22 04:37:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 22 04:37:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 22 04:37:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:37:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:35.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:35 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 22 04:37:35 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 22 04:37:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v96: 337 pgs: 337 active+clean; 458 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 134 B/s, 6 objects/s recovering
Jan 22 04:37:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 22 04:37:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 22 04:37:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 22 04:37:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 22 04:37:36 np0005591760 python3.9[109106]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:37:36 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 22 04:37:36 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 22 04:37:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:36.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 22 04:37:36 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 22 04:37:36 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 22 04:37:36 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 22 04:37:36 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 22 04:37:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 22 04:37:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 22 04:37:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 22 04:37:36 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 22 04:37:36 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.18 deep-scrub starts
Jan 22 04:37:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.18 deep-scrub ok
Jan 22 04:37:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:37.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:37 np0005591760 python3.9[109394]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 22 04:37:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:37:37] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 22 04:37:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:37:37] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 22 04:37:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 22 04:37:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 22 04:37:37 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 22 04:37:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:37 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 22 04:37:37 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 22 04:37:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v99: 337 pgs: 337 active+clean; 458 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 118 B/s, 5 objects/s recovering
Jan 22 04:37:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 22 04:37:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 22 04:37:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 22 04:37:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 22 04:37:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:37:38 np0005591760 python3.9[109547]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 22 04:37:38 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 22 04:37:38 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 22 04:37:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:38.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:38 np0005591760 python3.9[109699]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:37:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 22 04:37:38 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 22 04:37:38 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 22 04:37:38 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 22 04:37:38 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 22 04:37:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 22 04:37:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 22 04:37:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 22 04:37:38 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 22 04:37:39 np0005591760 python3.9[109852]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 22 04:37:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:39.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:39 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 22 04:37:39 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 22 04:37:39 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 22 04:37:39 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 22 04:37:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v101: 337 pgs: 337 active+clean; 458 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 97 B/s, 4 objects/s recovering
Jan 22 04:37:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 22 04:37:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 22 04:37:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 22 04:37:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 22 04:37:40 np0005591760 python3.9[110005]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:37:40 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 22 04:37:40 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 22 04:37:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:40.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:40 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 22 04:37:40 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 22 04:37:40 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 22 04:37:40 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 22 04:37:40 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 22 04:37:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 22 04:37:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 22 04:37:40 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 22 04:37:40 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 22 04:37:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:40 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:37:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:40 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:37:40 np0005591760 python3.9[110158]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:37:41 np0005591760 python3.9[110236]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:37:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:41.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:41 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 22 04:37:41 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 22 04:37:41 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 22 04:37:41 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 22 04:37:41 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 22 04:37:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v103: 337 pgs: 337 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 609 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Jan 22 04:37:41 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 22 04:37:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 22 04:37:41 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 22 04:37:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 22 04:37:41 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 22 04:37:41 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 22 04:37:42 np0005591760 python3.9[110389]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:37:42 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 22 04:37:42 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 22 04:37:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:37:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:42.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:37:42 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 22 04:37:42 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 22 04:37:42 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 22 04:37:42 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 22 04:37:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 22 04:37:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 22 04:37:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 22 04:37:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 22 04:37:42 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 22 04:37:43 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 88 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=11.997930527s) [1] r=-1 lpr=88 pi=[70,88)/1 crt=43'42 mlcod 43'42 active pruub 265.397277832s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:43 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 89 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=11.997831345s) [1] r=-1 lpr=88 pi=[70,88)/1 crt=43'42 mlcod 0'0 unknown NOTIFY pruub 265.397277832s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 22 04:37:43 np0005591760 python3.9[110544]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 22 04:37:43 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 22 04:37:43 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 22 04:37:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:43.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:43 np0005591760 python3.9[110697]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 22 04:37:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v107: 337 pgs: 337 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 767 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 04:37:43 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 04:37:44 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 22 04:37:44 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 04:37:44 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 04:37:44 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 22 04:37:44 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 22 04:37:44 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:44 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Jan 22 04:37:44 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Jan 22 04:37:44 np0005591760 python3.9[110851]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 04:37:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:37:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:44.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:37:44 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 04:37:44 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 04:37:45 np0005591760 python3.9[111004]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 22 04:37:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 22 04:37:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 22 04:37:45 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 lc 42'1 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:45 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 22 04:37:45 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Jan 22 04:37:45 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Jan 22 04:37:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:45.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v110: 337 pgs: 2 remapped+peering, 1 active+recovering, 334 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 256 B/s rd, 0 B/s wr, 0 op/s; 2/232 objects misplaced (0.862%); 82 B/s, 3 objects/s recovering
Jan 22 04:37:45 np0005591760 python3.9[111156]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:37:46 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 22 04:37:46 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 22 04:37:46 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 22 04:37:46 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 22 04:37:46 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 22 04:37:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:46.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 22 04:37:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:46 np0005591760 systemd[75491]: Created slice User Background Tasks Slice.
Jan 22 04:37:47 np0005591760 systemd[75491]: Starting Cleanup of User's Temporary Files and Directories...
Jan 22 04:37:47 np0005591760 systemd[75491]: Finished Cleanup of User's Temporary Files and Directories.
Jan 22 04:37:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 22 04:37:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 22 04:37:47 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 22 04:37:47 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Jan 22 04:37:47 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Jan 22 04:37:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:37:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:47.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:37:47 np0005591760 python3.9[111328]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:37:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:37:47] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 22 04:37:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:37:47] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 22 04:37:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:47 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624cf4d04d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v113: 337 pgs: 2 remapped+peering, 1 active+recovering, 334 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s; 2/232 objects misplaced (0.862%); 82 B/s, 3 objects/s recovering
Jan 22 04:37:48 np0005591760 python3.9[111559]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:37:48 np0005591760 podman[111588]: 2026-01-22 09:37:48.061069519 +0000 UTC m=+0.059914356 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:37:48 np0005591760 podman[111588]: 2026-01-22 09:37:48.14889777 +0000 UTC m=+0.147742586 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:37:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:37:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 22 04:37:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 22 04:37:48 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 22 04:37:48 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 22 04:37:48 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 22 04:37:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:48 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:48 np0005591760 python3.9[111706]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:37:48 np0005591760 podman[111785]: 2026-01-22 09:37:48.607999001 +0000 UTC m=+0.051463734 container exec e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:37:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:37:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:48.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:37:48 np0005591760 podman[111785]: 2026-01-22 09:37:48.647306848 +0000 UTC m=+0.090771570 container exec_died e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:37:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:48 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:49 np0005591760 podman[111997]: 2026-01-22 09:37:49.007204053 +0000 UTC m=+0.048932813 container exec 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:37:49 np0005591760 podman[111997]: 2026-01-22 09:37:49.058578096 +0000 UTC m=+0.100306838 container exec_died 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:37:49 np0005591760 python3.9[111986]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:37:49
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Some PGs (0.005935) are inactive; try again later
Jan 22 04:37:49 np0005591760 podman[112085]: 2026-01-22 09:37:49.278349616 +0000 UTC m=+0.055126319 container exec 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:37:49 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Jan 22 04:37:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:49.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:49 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Jan 22 04:37:49 np0005591760 podman[112085]: 2026-01-22 09:37:49.457019367 +0000 UTC m=+0.233796051 container exec_died 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:37:49 np0005591760 python3.9[112176]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:37:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:49 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:49 np0005591760 podman[112240]: 2026-01-22 09:37:49.665055242 +0000 UTC m=+0.048603912 container exec fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:37:49 np0005591760 podman[112240]: 2026-01-22 09:37:49.678018882 +0000 UTC m=+0.061567553 container exec_died fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:37:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v115: 337 pgs: 2 remapped+peering, 1 active+recovering, 334 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 220 B/s rd, 0 B/s wr, 0 op/s; 2/232 objects misplaced (0.862%); 71 B/s, 3 objects/s recovering
Jan 22 04:37:49 np0005591760 podman[112291]: 2026-01-22 09:37:49.884010052 +0000 UTC m=+0.051026761 container exec 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, version=2.2.4, build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, architecture=x86_64, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.28.2, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 22 04:37:49 np0005591760 podman[112291]: 2026-01-22 09:37:49.900026568 +0000 UTC m=+0.067043256 container exec_died 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., name=keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64)
Jan 22 04:37:50 np0005591760 podman[112440]: 2026-01-22 09:37:50.127072231 +0000 UTC m=+0.050593173 container exec a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:37:50 np0005591760 podman[112440]: 2026-01-22 09:37:50.185842209 +0000 UTC m=+0.109363141 container exec_died a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:37:50 np0005591760 podman[112516]: 2026-01-22 09:37:50.351541197 +0000 UTC m=+0.047532233 container exec d791c845824805d9759eea399ab1b77ce3a5ac18664d4cdeedcfbb4e670a4815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:37:50 np0005591760 podman[112516]: 2026-01-22 09:37:50.363074461 +0000 UTC m=+0.059065476 container exec_died d791c845824805d9759eea399ab1b77ce3a5ac18664d4cdeedcfbb4e670a4815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 04:37:50 np0005591760 python3.9[112490]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:37:50 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.e scrub starts
Jan 22 04:37:50 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.e scrub ok
Jan 22 04:37:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/093750 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:37:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:50 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5b260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:37:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:37:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:37:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:50.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:37:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:50 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb034001e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:37:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:37:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:51 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:51 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:51 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:37:51 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:51 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:51 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:37:51 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 22 04:37:51 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 22 04:37:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:37:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:51.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:37:51 np0005591760 podman[112755]: 2026-01-22 09:37:51.632477383 +0000 UTC m=+0.037298649 container create 8d127f5081a197ed78bceaa477f4b7da8ad54f91eeea33163512def5edbb60ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:37:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:51 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:51 np0005591760 systemd[1]: Started libpod-conmon-8d127f5081a197ed78bceaa477f4b7da8ad54f91eeea33163512def5edbb60ff.scope.
Jan 22 04:37:51 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:37:51 np0005591760 podman[112755]: 2026-01-22 09:37:51.703525198 +0000 UTC m=+0.108346474 container init 8d127f5081a197ed78bceaa477f4b7da8ad54f91eeea33163512def5edbb60ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nobel, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:37:51 np0005591760 podman[112755]: 2026-01-22 09:37:51.710848705 +0000 UTC m=+0.115669971 container start 8d127f5081a197ed78bceaa477f4b7da8ad54f91eeea33163512def5edbb60ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nobel, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 22 04:37:51 np0005591760 podman[112755]: 2026-01-22 09:37:51.712317544 +0000 UTC m=+0.117138810 container attach 8d127f5081a197ed78bceaa477f4b7da8ad54f91eeea33163512def5edbb60ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nobel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:37:51 np0005591760 podman[112755]: 2026-01-22 09:37:51.61792077 +0000 UTC m=+0.022742056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:37:51 np0005591760 fervent_nobel[112769]: 167 167
Jan 22 04:37:51 np0005591760 systemd[1]: libpod-8d127f5081a197ed78bceaa477f4b7da8ad54f91eeea33163512def5edbb60ff.scope: Deactivated successfully.
Jan 22 04:37:51 np0005591760 conmon[112769]: conmon 8d127f5081a197ed78bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d127f5081a197ed78bceaa477f4b7da8ad54f91eeea33163512def5edbb60ff.scope/container/memory.events
Jan 22 04:37:51 np0005591760 podman[112774]: 2026-01-22 09:37:51.761810231 +0000 UTC m=+0.025358138 container died 8d127f5081a197ed78bceaa477f4b7da8ad54f91eeea33163512def5edbb60ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nobel, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:37:51 np0005591760 systemd[1]: var-lib-containers-storage-overlay-aa05b61209f36609d7d064f9b46fab51c4c25ad3b307da826ced659cff41b501-merged.mount: Deactivated successfully.
Jan 22 04:37:51 np0005591760 podman[112774]: 2026-01-22 09:37:51.783422216 +0000 UTC m=+0.046970123 container remove 8d127f5081a197ed78bceaa477f4b7da8ad54f91eeea33163512def5edbb60ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:37:51 np0005591760 systemd[1]: libpod-conmon-8d127f5081a197ed78bceaa477f4b7da8ad54f91eeea33163512def5edbb60ff.scope: Deactivated successfully.
Jan 22 04:37:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v116: 337 pgs: 337 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 138 B/s, 2 objects/s recovering
Jan 22 04:37:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 22 04:37:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 22 04:37:51 np0005591760 podman[112834]: 2026-01-22 09:37:51.943346166 +0000 UTC m=+0.043691751 container create 31c72c5f5a6e1bf8f260572a1b9416b98400437dabacf39387e5a37a46aa8ef1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_black, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 04:37:51 np0005591760 systemd[1]: Started libpod-conmon-31c72c5f5a6e1bf8f260572a1b9416b98400437dabacf39387e5a37a46aa8ef1.scope.
Jan 22 04:37:52 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:37:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1799cfc786c57a0807a55d7a089fef1b5a5ec7ecb172e3321fd2ed0df744c06e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1799cfc786c57a0807a55d7a089fef1b5a5ec7ecb172e3321fd2ed0df744c06e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1799cfc786c57a0807a55d7a089fef1b5a5ec7ecb172e3321fd2ed0df744c06e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1799cfc786c57a0807a55d7a089fef1b5a5ec7ecb172e3321fd2ed0df744c06e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1799cfc786c57a0807a55d7a089fef1b5a5ec7ecb172e3321fd2ed0df744c06e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:52 np0005591760 podman[112834]: 2026-01-22 09:37:52.021040451 +0000 UTC m=+0.121386026 container init 31c72c5f5a6e1bf8f260572a1b9416b98400437dabacf39387e5a37a46aa8ef1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_black, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 04:37:52 np0005591760 podman[112834]: 2026-01-22 09:37:51.927768028 +0000 UTC m=+0.028113622 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:37:52 np0005591760 podman[112834]: 2026-01-22 09:37:52.030988435 +0000 UTC m=+0.131334011 container start 31c72c5f5a6e1bf8f260572a1b9416b98400437dabacf39387e5a37a46aa8ef1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:37:52 np0005591760 podman[112834]: 2026-01-22 09:37:52.032486681 +0000 UTC m=+0.132832255 container attach 31c72c5f5a6e1bf8f260572a1b9416b98400437dabacf39387e5a37a46aa8ef1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:37:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 22 04:37:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 22 04:37:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 22 04:37:52 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 22 04:37:52 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 22 04:37:52 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 22 04:37:52 np0005591760 python3.9[112936]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:37:52 np0005591760 affectionate_black[112881]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:37:52 np0005591760 affectionate_black[112881]: --> All data devices are unavailable
Jan 22 04:37:52 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 22 04:37:52 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 22 04:37:52 np0005591760 systemd[1]: libpod-31c72c5f5a6e1bf8f260572a1b9416b98400437dabacf39387e5a37a46aa8ef1.scope: Deactivated successfully.
Jan 22 04:37:52 np0005591760 podman[112834]: 2026-01-22 09:37:52.365407688 +0000 UTC m=+0.465753263 container died 31c72c5f5a6e1bf8f260572a1b9416b98400437dabacf39387e5a37a46aa8ef1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_black, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 04:37:52 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1799cfc786c57a0807a55d7a089fef1b5a5ec7ecb172e3321fd2ed0df744c06e-merged.mount: Deactivated successfully.
Jan 22 04:37:52 np0005591760 podman[112834]: 2026-01-22 09:37:52.402131683 +0000 UTC m=+0.502477257 container remove 31c72c5f5a6e1bf8f260572a1b9416b98400437dabacf39387e5a37a46aa8ef1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:37:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:52 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:52 np0005591760 systemd[1]: libpod-conmon-31c72c5f5a6e1bf8f260572a1b9416b98400437dabacf39387e5a37a46aa8ef1.scope: Deactivated successfully.
Jan 22 04:37:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:52.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:52 np0005591760 python3.9[113167]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 22 04:37:52 np0005591760 podman[113190]: 2026-01-22 09:37:52.94241741 +0000 UTC m=+0.039170759 container create b627e91c197c24a54abd6773aa0d62d6d2ba79bfc29edd23f0e035789fb7c204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shaw, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 04:37:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:52 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5b260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:52 np0005591760 systemd[1]: Started libpod-conmon-b627e91c197c24a54abd6773aa0d62d6d2ba79bfc29edd23f0e035789fb7c204.scope.
Jan 22 04:37:52 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:37:53 np0005591760 podman[113190]: 2026-01-22 09:37:53.007463189 +0000 UTC m=+0.104216548 container init b627e91c197c24a54abd6773aa0d62d6d2ba79bfc29edd23f0e035789fb7c204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shaw, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:37:53 np0005591760 podman[113190]: 2026-01-22 09:37:53.014600806 +0000 UTC m=+0.111354145 container start b627e91c197c24a54abd6773aa0d62d6d2ba79bfc29edd23f0e035789fb7c204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shaw, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 22 04:37:53 np0005591760 podman[113190]: 2026-01-22 09:37:53.018435404 +0000 UTC m=+0.115188743 container attach b627e91c197c24a54abd6773aa0d62d6d2ba79bfc29edd23f0e035789fb7c204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:37:53 np0005591760 elated_shaw[113215]: 167 167
Jan 22 04:37:53 np0005591760 systemd[1]: libpod-b627e91c197c24a54abd6773aa0d62d6d2ba79bfc29edd23f0e035789fb7c204.scope: Deactivated successfully.
Jan 22 04:37:53 np0005591760 podman[113190]: 2026-01-22 09:37:53.020419055 +0000 UTC m=+0.117172534 container died b627e91c197c24a54abd6773aa0d62d6d2ba79bfc29edd23f0e035789fb7c204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shaw, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 04:37:53 np0005591760 podman[113190]: 2026-01-22 09:37:52.925615904 +0000 UTC m=+0.022369263 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:37:53 np0005591760 systemd[1]: var-lib-containers-storage-overlay-08b3d735a22436d392b8efebf90bf4cea3c46d060dbd9ff8756166eaf2ed5572-merged.mount: Deactivated successfully.
Jan 22 04:37:53 np0005591760 podman[113190]: 2026-01-22 09:37:53.04512381 +0000 UTC m=+0.141877149 container remove b627e91c197c24a54abd6773aa0d62d6d2ba79bfc29edd23f0e035789fb7c204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shaw, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 22 04:37:53 np0005591760 systemd[1]: libpod-conmon-b627e91c197c24a54abd6773aa0d62d6d2ba79bfc29edd23f0e035789fb7c204.scope: Deactivated successfully.
Jan 22 04:37:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:37:53 np0005591760 podman[113302]: 2026-01-22 09:37:53.203237487 +0000 UTC m=+0.046392844 container create cbe04cda477247d341946a7b122909aa3a5ef0fd96175294d07d4b0a1a795597 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:37:53 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 22 04:37:53 np0005591760 systemd[1]: Started libpod-conmon-cbe04cda477247d341946a7b122909aa3a5ef0fd96175294d07d4b0a1a795597.scope.
Jan 22 04:37:53 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:37:53 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404effc344cfb68e2aa00df77c8952a378f47789aa249c546fd13c0881148e3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:53 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404effc344cfb68e2aa00df77c8952a378f47789aa249c546fd13c0881148e3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:53 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404effc344cfb68e2aa00df77c8952a378f47789aa249c546fd13c0881148e3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:53 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404effc344cfb68e2aa00df77c8952a378f47789aa249c546fd13c0881148e3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:53 np0005591760 podman[113302]: 2026-01-22 09:37:53.27397147 +0000 UTC m=+0.117126837 container init cbe04cda477247d341946a7b122909aa3a5ef0fd96175294d07d4b0a1a795597 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 22 04:37:53 np0005591760 podman[113302]: 2026-01-22 09:37:53.185556523 +0000 UTC m=+0.028711891 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:37:53 np0005591760 podman[113302]: 2026-01-22 09:37:53.28252175 +0000 UTC m=+0.125677097 container start cbe04cda477247d341946a7b122909aa3a5ef0fd96175294d07d4b0a1a795597 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_albattani, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 22 04:37:53 np0005591760 podman[113302]: 2026-01-22 09:37:53.284718571 +0000 UTC m=+0.127873918 container attach cbe04cda477247d341946a7b122909aa3a5ef0fd96175294d07d4b0a1a795597 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_albattani, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 04:37:53 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.c scrub starts
Jan 22 04:37:53 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.c scrub ok
Jan 22 04:37:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:53.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:53 np0005591760 python3.9[113394]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]: {
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:    "0": [
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:        {
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            "devices": [
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "/dev/loop3"
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            ],
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            "lv_name": "ceph_lv0",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            "lv_size": "21470642176",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            "name": "ceph_lv0",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            "tags": {
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.cluster_name": "ceph",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.crush_device_class": "",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.encrypted": "0",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.osd_id": "0",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.type": "block",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.vdo": "0",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:                "ceph.with_tpm": "0"
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            },
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            "type": "block",
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:            "vg_name": "ceph_vg0"
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:        }
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]:    ]
Jan 22 04:37:53 np0005591760 vigilant_albattani[113339]: }
Jan 22 04:37:53 np0005591760 systemd[1]: libpod-cbe04cda477247d341946a7b122909aa3a5ef0fd96175294d07d4b0a1a795597.scope: Deactivated successfully.
Jan 22 04:37:53 np0005591760 podman[113302]: 2026-01-22 09:37:53.574682023 +0000 UTC m=+0.417837369 container died cbe04cda477247d341946a7b122909aa3a5ef0fd96175294d07d4b0a1a795597 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:37:53 np0005591760 systemd[1]: var-lib-containers-storage-overlay-404effc344cfb68e2aa00df77c8952a378f47789aa249c546fd13c0881148e3f-merged.mount: Deactivated successfully.
Jan 22 04:37:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=95 pruub=9.823126793s) [1] r=-1 lpr=95 pi=[53,95)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 273.659820557s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:53 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=95 pruub=9.823087692s) [1] r=-1 lpr=95 pi=[53,95)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659820557s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:53 np0005591760 podman[113302]: 2026-01-22 09:37:53.605739554 +0000 UTC m=+0.448894901 container remove cbe04cda477247d341946a7b122909aa3a5ef0fd96175294d07d4b0a1a795597 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_albattani, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 22 04:37:53 np0005591760 systemd[1]: libpod-conmon-cbe04cda477247d341946a7b122909aa3a5ef0fd96175294d07d4b0a1a795597.scope: Deactivated successfully.
Jan 22 04:37:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:53 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5b260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v118: 337 pgs: 337 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 125 B/s, 2 objects/s recovering
Jan 22 04:37:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 22 04:37:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 22 04:37:54 np0005591760 podman[113567]: 2026-01-22 09:37:54.120004836 +0000 UTC m=+0.035738328 container create d4769c5a3b540fcf33d87cc5ea1156c70a4efe2a808caa56b7ab529acb2c04cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:37:54 np0005591760 systemd[1]: Started libpod-conmon-d4769c5a3b540fcf33d87cc5ea1156c70a4efe2a808caa56b7ab529acb2c04cf.scope.
Jan 22 04:37:54 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:37:54 np0005591760 podman[113567]: 2026-01-22 09:37:54.195277265 +0000 UTC m=+0.111010757 container init d4769c5a3b540fcf33d87cc5ea1156c70a4efe2a808caa56b7ab529acb2c04cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_nash, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 04:37:54 np0005591760 podman[113567]: 2026-01-22 09:37:54.200899304 +0000 UTC m=+0.116632786 container start d4769c5a3b540fcf33d87cc5ea1156c70a4efe2a808caa56b7ab529acb2c04cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:37:54 np0005591760 podman[113567]: 2026-01-22 09:37:54.202235913 +0000 UTC m=+0.117969415 container attach d4769c5a3b540fcf33d87cc5ea1156c70a4efe2a808caa56b7ab529acb2c04cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:37:54 np0005591760 podman[113567]: 2026-01-22 09:37:54.105743709 +0000 UTC m=+0.021477211 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:37:54 np0005591760 compassionate_nash[113580]: 167 167
Jan 22 04:37:54 np0005591760 systemd[1]: libpod-d4769c5a3b540fcf33d87cc5ea1156c70a4efe2a808caa56b7ab529acb2c04cf.scope: Deactivated successfully.
Jan 22 04:37:54 np0005591760 conmon[113580]: conmon d4769c5a3b540fcf33d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4769c5a3b540fcf33d87cc5ea1156c70a4efe2a808caa56b7ab529acb2c04cf.scope/container/memory.events
Jan 22 04:37:54 np0005591760 podman[113567]: 2026-01-22 09:37:54.208183287 +0000 UTC m=+0.123916769 container died d4769c5a3b540fcf33d87cc5ea1156c70a4efe2a808caa56b7ab529acb2c04cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Jan 22 04:37:54 np0005591760 systemd[1]: var-lib-containers-storage-overlay-067c61e28019f72d8efb31f89d37ba64ed2a570d2e41f242ef6a010ba2cee4da-merged.mount: Deactivated successfully.
Jan 22 04:37:54 np0005591760 podman[113567]: 2026-01-22 09:37:54.230354635 +0000 UTC m=+0.146088117 container remove d4769c5a3b540fcf33d87cc5ea1156c70a4efe2a808caa56b7ab529acb2c04cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 04:37:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 22 04:37:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 22 04:37:54 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 22 04:37:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 22 04:37:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 22 04:37:54 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 22 04:37:54 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96 pruub=9.176632881s) [1] r=-1 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 273.659759521s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:54 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:54 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96 pruub=9.176480293s) [1] r=-1 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659759521s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:54 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:54 np0005591760 systemd[1]: libpod-conmon-d4769c5a3b540fcf33d87cc5ea1156c70a4efe2a808caa56b7ab529acb2c04cf.scope: Deactivated successfully.
Jan 22 04:37:54 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.b scrub starts
Jan 22 04:37:54 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.b scrub ok
Jan 22 04:37:54 np0005591760 podman[113629]: 2026-01-22 09:37:54.386156766 +0000 UTC m=+0.039223910 container create 799025516df062167d20d4c0b33fad077e19220014734798b38e733aee70b6b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:37:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:54 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:54 np0005591760 systemd[1]: Started libpod-conmon-799025516df062167d20d4c0b33fad077e19220014734798b38e733aee70b6b4.scope.
Jan 22 04:37:54 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:37:54 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1454e3ab007ad90d4f92221e6243752233fadc9fc883c41106aa65bbb41d9a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:54 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1454e3ab007ad90d4f92221e6243752233fadc9fc883c41106aa65bbb41d9a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:54 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1454e3ab007ad90d4f92221e6243752233fadc9fc883c41106aa65bbb41d9a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:54 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1454e3ab007ad90d4f92221e6243752233fadc9fc883c41106aa65bbb41d9a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:37:54 np0005591760 podman[113629]: 2026-01-22 09:37:54.461079956 +0000 UTC m=+0.114147090 container init 799025516df062167d20d4c0b33fad077e19220014734798b38e733aee70b6b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elgamal, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:37:54 np0005591760 podman[113629]: 2026-01-22 09:37:54.370303326 +0000 UTC m=+0.023370470 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:37:54 np0005591760 podman[113629]: 2026-01-22 09:37:54.468753292 +0000 UTC m=+0.121820427 container start 799025516df062167d20d4c0b33fad077e19220014734798b38e733aee70b6b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elgamal, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:37:54 np0005591760 podman[113629]: 2026-01-22 09:37:54.476106586 +0000 UTC m=+0.129173720 container attach 799025516df062167d20d4c0b33fad077e19220014734798b38e733aee70b6b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elgamal, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:37:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:54.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:54 np0005591760 python3.9[113692]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:37:54 np0005591760 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 22 04:37:54 np0005591760 systemd[1]: tuned.service: Deactivated successfully.
Jan 22 04:37:54 np0005591760 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 22 04:37:54 np0005591760 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 04:37:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:54 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb034002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:54 np0005591760 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 04:37:55 np0005591760 affectionate_elgamal[113689]: {}
Jan 22 04:37:55 np0005591760 lvm[113802]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:37:55 np0005591760 lvm[113802]: VG ceph_vg0 finished
Jan 22 04:37:55 np0005591760 systemd[1]: libpod-799025516df062167d20d4c0b33fad077e19220014734798b38e733aee70b6b4.scope: Deactivated successfully.
Jan 22 04:37:55 np0005591760 systemd[1]: libpod-799025516df062167d20d4c0b33fad077e19220014734798b38e733aee70b6b4.scope: Consumed 1.060s CPU time.
Jan 22 04:37:55 np0005591760 podman[113629]: 2026-01-22 09:37:55.122823895 +0000 UTC m=+0.775891028 container died 799025516df062167d20d4c0b33fad077e19220014734798b38e733aee70b6b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elgamal, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:37:55 np0005591760 systemd[1]: var-lib-containers-storage-overlay-f1454e3ab007ad90d4f92221e6243752233fadc9fc883c41106aa65bbb41d9a7-merged.mount: Deactivated successfully.
Jan 22 04:37:55 np0005591760 podman[113629]: 2026-01-22 09:37:55.155450124 +0000 UTC m=+0.808517258 container remove 799025516df062167d20d4c0b33fad077e19220014734798b38e733aee70b6b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elgamal, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:37:55 np0005591760 systemd[1]: libpod-conmon-799025516df062167d20d4c0b33fad077e19220014734798b38e733aee70b6b4.scope: Deactivated successfully.
Jan 22 04:37:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:37:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:37:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 22 04:37:55 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 22 04:37:55 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:55 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:37:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 22 04:37:55 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 22 04:37:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:37:55 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:55 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 22 04:37:55 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 22 04:37:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:55.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:55 np0005591760 python3.9[113964]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 22 04:37:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:55 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb034002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v121: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 138 B/s, 2 objects/s recovering
Jan 22 04:37:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 22 04:37:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 22 04:37:56 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 22 04:37:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98 pruub=14.991450310s) [1] async=[1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 281.493499756s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98 pruub=14.991397858s) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.493499756s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:56 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:37:56 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 22 04:37:56 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 22 04:37:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:56 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5b260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:56.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:56 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 22 04:37:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 22 04:37:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99 pruub=14.993288994s) [1] async=[1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 282.506774902s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:37:57 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99 pruub=14.992941856s) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 282.506774902s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:37:57 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 22 04:37:57 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 22 04:37:57 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 22 04:37:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:37:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:57.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:37:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:37:57] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 22 04:37:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:37:57] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 22 04:37:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:57 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb034002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v124: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:37:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:37:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 22 04:37:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 22 04:37:58 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 22 04:37:58 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 22 04:37:58 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 22 04:37:58 np0005591760 python3.9[114119]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:37:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:58 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.002000019s ======
Jan 22 04:37:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:37:58.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000019s
Jan 22 04:37:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:58 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:59 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 22 04:37:59 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 22 04:37:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:37:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:37:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:37:59.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:37:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:37:59 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:37:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v126: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:00 np0005591760 python3.9[114274]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:38:00 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 22 04:38:00 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 22 04:38:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:00 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb034002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:00 np0005591760 systemd[1]: session-38.scope: Deactivated successfully.
Jan 22 04:38:00 np0005591760 systemd[1]: session-38.scope: Consumed 52.830s CPU time.
Jan 22 04:38:00 np0005591760 systemd-logind[747]: Session 38 logged out. Waiting for processes to exit.
Jan 22 04:38:00 np0005591760 systemd-logind[747]: Removed session 38.
Jan 22 04:38:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:00.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:00 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:01 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 22 04:38:01 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 22 04:38:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:01 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v127: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 14 op/s; 18 B/s, 1 objects/s recovering
Jan 22 04:38:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 22 04:38:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 22 04:38:02 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 22 04:38:02 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 22 04:38:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 22 04:38:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 22 04:38:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 22 04:38:02 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=101 pruub=9.111286163s) [1] r=-1 lpr=101 pi=[53,101)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 281.659820557s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:02 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=101 pruub=9.110987663s) [1] r=-1 lpr=101 pi=[53,101)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.659820557s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:38:02 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 22 04:38:02 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 22 04:38:02 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 22 04:38:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:02 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5c750 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:02.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:02 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5c750 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:38:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 22 04:38:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 22 04:38:03 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 22 04:38:03 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:03 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 04:38:03 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.3 deep-scrub starts
Jan 22 04:38:03 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.3 deep-scrub ok
Jan 22 04:38:03 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 22 04:38:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:03.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v130: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 14 op/s; 18 B/s, 1 objects/s recovering
Jan 22 04:38:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 22 04:38:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 22 04:38:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 22 04:38:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 22 04:38:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 22 04:38:04 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 22 04:38:04 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 22 04:38:04 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 22 04:38:04 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 22 04:38:04 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 22 04:38:04 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 22 04:38:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:04 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:04 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:38:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:04.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:04 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:05 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 22 04:38:05 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 22 04:38:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 22 04:38:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 22 04:38:05 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 22 04:38:05 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104 pruub=15.148774147s) [1] async=[1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 290.725952148s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:05 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104 pruub=15.148613930s) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 290.725952148s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 04:38:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:05.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:05 np0005591760 systemd-logind[747]: New session 39 of user zuul.
Jan 22 04:38:05 np0005591760 systemd[1]: Started Session 39 of User zuul.
Jan 22 04:38:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:05 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v133: 337 pgs: 1 active+remapped, 1 active+clean+scrubbing, 335 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Jan 22 04:38:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 22 04:38:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 22 04:38:06 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Jan 22 04:38:06 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Jan 22 04:38:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 22 04:38:06 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 22 04:38:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 22 04:38:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 22 04:38:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 22 04:38:06 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 22 04:38:06 np0005591760 python3.9[114461]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:38:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:06 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00af30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:06.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:06 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00af30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:07.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:07.155Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:07.161Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:07 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 22 04:38:07 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 22 04:38:07 np0005591760 python3.9[114618]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 22 04:38:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 22 04:38:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:07.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:38:07] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Jan 22 04:38:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:38:07] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Jan 22 04:38:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:07 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v135: 337 pgs: 1 active+remapped, 1 active+clean+scrubbing, 335 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 0 objects/s recovering
Jan 22 04:38:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 22 04:38:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 22 04:38:08 np0005591760 python3.9[114771]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:38:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:38:08 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 22 04:38:08 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 22 04:38:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 22 04:38:08 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 22 04:38:08 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 22 04:38:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 22 04:38:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 22 04:38:08 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 22 04:38:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:08 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:38:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:08.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:38:08 np0005591760 python3.9[114856]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 04:38:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:08 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00af30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:09 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 22 04:38:09 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 22 04:38:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 22 04:38:09 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 22 04:38:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 22 04:38:09 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 22 04:38:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:09.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:09 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00af30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v138: 337 pgs: 1 active+remapped, 1 active+clean+scrubbing, 335 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 22 04:38:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 22 04:38:10 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 22 04:38:10 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 22 04:38:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 22 04:38:10 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 22 04:38:10 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 22 04:38:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 22 04:38:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 22 04:38:10 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 22 04:38:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:10 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:10 np0005591760 python3.9[115036]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:38:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:10.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:10 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:11 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.19 deep-scrub starts
Jan 22 04:38:11 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.19 deep-scrub ok
Jan 22 04:38:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 22 04:38:11 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 22 04:38:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 22 04:38:11 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 22 04:38:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:38:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:11.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:38:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:11 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00af30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v141: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:12 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.1c deep-scrub starts
Jan 22 04:38:12 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.1c deep-scrub ok
Jan 22 04:38:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 22 04:38:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 22 04:38:12 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 22 04:38:12 np0005591760 python3.9[115191]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 04:38:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:12 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00af30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:12.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:12 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:13 np0005591760 python3.9[115345]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:38:13 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Jan 22 04:38:13 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Jan 22 04:38:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:38:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 22 04:38:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 22 04:38:13 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 22 04:38:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:13.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v144: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:13 np0005591760 python3.9[115497]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 22 04:38:14 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 22 04:38:14 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 22 04:38:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 22 04:38:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 22 04:38:14 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 22 04:38:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:14 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00c1b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:14.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:14 np0005591760 python3.9[115648]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:38:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:14 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00c1b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:15 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 22 04:38:15 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 22 04:38:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:15.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:15 np0005591760 python3.9[115807]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:38:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:15 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v146: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 49 B/s, 1 objects/s recovering
Jan 22 04:38:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 22 04:38:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 22 04:38:16 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 22 04:38:16 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 22 04:38:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 22 04:38:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:16 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 22 04:38:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 22 04:38:16 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 22 04:38:16 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 22 04:38:16 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 22 04:38:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:16.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:16.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:16.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:16.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:16.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:16 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00c1b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:17 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 22 04:38:17 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 22 04:38:17 np0005591760 python3.9[115962]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:38:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:38:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:17.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:38:17 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 22 04:38:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:38:17] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Jan 22 04:38:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:38:17] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Jan 22 04:38:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:17 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00c1b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v148: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 40 B/s, 1 objects/s recovering
Jan 22 04:38:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 22 04:38:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 22 04:38:18 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.17 deep-scrub starts
Jan 22 04:38:18 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.17 deep-scrub ok
Jan 22 04:38:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 04:38:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:18 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 22 04:38:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 22 04:38:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 22 04:38:18 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 22 04:38:18 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 22 04:38:18 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 22 04:38:18 np0005591760 python3.9[116250]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 22 04:38:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:18.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:18 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:19 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Jan 22 04:38:19 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Jan 22 04:38:19 np0005591760 python3.9[116402]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:38:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:38:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:38:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:38:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:38:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:38:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:38:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:19.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:19 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 22 04:38:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:19 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00c1b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:19 np0005591760 python3.9[116556]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:38:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v150: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Jan 22 04:38:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 22 04:38:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 22 04:38:20 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.1b deep-scrub starts
Jan 22 04:38:20 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.1b deep-scrub ok
Jan 22 04:38:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:20 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb050003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 22 04:38:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 22 04:38:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 22 04:38:20 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 22 04:38:20 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:38:20 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 22 04:38:20 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 22 04:38:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:20.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:20 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:21 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 22 04:38:21 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 22 04:38:21 np0005591760 python3.9[116711]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:38:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:21.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 22 04:38:21 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:21 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.484966) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074701485077, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3163, "num_deletes": 251, "total_data_size": 5452896, "memory_usage": 5535536, "flush_reason": "Manual Compaction"}
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074701495584, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 5176700, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7434, "largest_seqno": 10596, "table_properties": {"data_size": 5160903, "index_size": 10179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4549, "raw_key_size": 42388, "raw_average_key_size": 23, "raw_value_size": 5125676, "raw_average_value_size": 2825, "num_data_blocks": 441, "num_entries": 1814, "num_filter_entries": 1814, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074588, "oldest_key_time": 1769074588, "file_creation_time": 1769074701, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 10721 microseconds, and 7519 cpu microseconds.
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.495751) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 5176700 bytes OK
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.495895) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.496336) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.496349) EVENT_LOG_v1 {"time_micros": 1769074701496346, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.496364) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 5437680, prev total WAL file size 5437680, number of live WAL files 2.
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.497657) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(5055KB)], [23(12MB)]
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074701497692, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 18134319, "oldest_snapshot_seqno": -1}
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3911 keys, 14129743 bytes, temperature: kUnknown
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074701534386, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14129743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14098105, "index_size": 20776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 99349, "raw_average_key_size": 25, "raw_value_size": 14020901, "raw_average_value_size": 3584, "num_data_blocks": 898, "num_entries": 3911, "num_filter_entries": 3911, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769074701, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.534618) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14129743 bytes
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.535151) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 493.1 rd, 384.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.9, 12.4 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(6.2) write-amplify(2.7) OK, records in: 4442, records dropped: 531 output_compression: NoCompression
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.535172) EVENT_LOG_v1 {"time_micros": 1769074701535162, "job": 8, "event": "compaction_finished", "compaction_time_micros": 36773, "compaction_time_cpu_micros": 21456, "output_level": 6, "num_output_files": 1, "total_output_size": 14129743, "num_input_records": 4442, "num_output_records": 3911, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074701535971, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074701537738, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.497591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.537799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.537803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.537805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.537807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:38:21.537808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:38:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:21 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v153: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 22 04:38:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 22 04:38:22 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 22 04:38:22 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 22 04:38:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:22 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00c1b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 22 04:38:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 22 04:38:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 22 04:38:22 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 22 04:38:22 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 22 04:38:22 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 22 04:38:22 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 22 04:38:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:22.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:22 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb050004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:23 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 22 04:38:23 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 22 04:38:23 np0005591760 python3.9[116866]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:38:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:38:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 22 04:38:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 22 04:38:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 luod=0'0 crt=43'1161 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:38:23 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 22 04:38:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:23.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:23 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:23 np0005591760 python3.9[117020]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 22 04:38:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v156: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 22 04:38:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 22 04:38:23 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:38:24 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 22 04:38:24 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 22 04:38:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 22 04:38:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 22 04:38:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 22 04:38:24 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:38:24 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 22 04:38:24 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:24 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 04:38:24 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=118/119 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:38:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:24 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:24 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 22 04:38:24 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 22 04:38:24 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 22 04:38:24 np0005591760 systemd[1]: session-39.scope: Deactivated successfully.
Jan 22 04:38:24 np0005591760 systemd[1]: session-39.scope: Consumed 14.839s CPU time.
Jan 22 04:38:24 np0005591760 systemd-logind[747]: Session 39 logged out. Waiting for processes to exit.
Jan 22 04:38:24 np0005591760 systemd-logind[747]: Removed session 39.
Jan 22 04:38:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:24.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:24 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00d2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:25 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 22 04:38:25 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 22 04:38:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 22 04:38:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 22 04:38:25 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 22 04:38:25 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:25 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 04:38:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:25.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:25 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb050004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v159: 337 pgs: 1 remapped+peering, 1 active+recovering+remapped, 335 active+clean; 459 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 1/225 objects misplaced (0.444%); 54 B/s, 1 objects/s recovering
Jan 22 04:38:26 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 22 04:38:26 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 22 04:38:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 22 04:38:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 22 04:38:26 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 luod=0'0 crt=43'1161 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:26 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:38:26 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 22 04:38:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:26 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:26.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:26.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:26.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:26.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:26.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:26 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:27 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 22 04:38:27 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 22 04:38:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 22 04:38:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 22 04:38:27 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 luod=0'0 crt=43'1161 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:27 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:38:27 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 22 04:38:27 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=121/122 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:38:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:27.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:38:27] "GET /metrics HTTP/1.1" 200 48432 "" "Prometheus/2.51.0"
Jan 22 04:38:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:38:27] "GET /metrics HTTP/1.1" 200 48432 "" "Prometheus/2.51.0"
Jan 22 04:38:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:27 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00d2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v162: 337 pgs: 1 remapped+peering, 1 active+recovering+remapped, 335 active+clean; 459 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 1/225 objects misplaced (0.444%); 54 B/s, 1 objects/s recovering
Jan 22 04:38:28 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 22 04:38:28 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 22 04:38:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:38:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 22 04:38:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 22 04:38:28 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 22 04:38:28 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 123 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=122/123 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:38:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:28 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb050004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:28.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:28 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:29 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 22 04:38:29 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 22 04:38:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:29.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:29 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:29 np0005591760 systemd-logind[747]: New session 40 of user zuul.
Jan 22 04:38:29 np0005591760 systemd[1]: Started Session 40 of User zuul.
Jan 22 04:38:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v164: 337 pgs: 1 remapped+peering, 1 active+recovering+remapped, 335 active+clean; 459 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 220 B/s rd, 0 op/s; 1/225 objects misplaced (0.444%); 47 B/s, 1 objects/s recovering
Jan 22 04:38:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=infra.usagestats t=2026-01-22T09:38:29.980004892Z level=info msg="Usage stats are ready to report"
Jan 22 04:38:30 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 22 04:38:30 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 22 04:38:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:30 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:30 np0005591760 python3.9[117230]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:38:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:30.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:30 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:31 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 22 04:38:31 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 22 04:38:31 np0005591760 python3.9[117385]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:38:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:31.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:31 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0500057d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v165: 337 pgs: 1 active+clean+scrubbing, 336 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 22 04:38:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 22 04:38:32 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 22 04:38:32 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 22 04:38:32 np0005591760 python3.9[117579]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:38:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:32 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0500057d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 22 04:38:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 22 04:38:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 22 04:38:32 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 22 04:38:32 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 22 04:38:32 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 22 04:38:32 np0005591760 systemd[1]: session-40.scope: Deactivated successfully.
Jan 22 04:38:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:32 np0005591760 systemd[1]: session-40.scope: Consumed 1.787s CPU time.
Jan 22 04:38:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:38:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:32.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:38:32 np0005591760 systemd-logind[747]: Session 40 logged out. Waiting for processes to exit.
Jan 22 04:38:32 np0005591760 systemd-logind[747]: Removed session 40.
Jan 22 04:38:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:32 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00d2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:33 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 22 04:38:33 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 22 04:38:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:38:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:38:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:33.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:38:33 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 22 04:38:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:33 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v167: 337 pgs: 1 active+clean+scrubbing, 336 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 22 04:38:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 22 04:38:34 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 22 04:38:34 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 22 04:38:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 22 04:38:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 22 04:38:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 22 04:38:34 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 22 04:38:34 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 22 04:38:34 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 22 04:38:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:34.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0500057d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:35 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.4 deep-scrub starts
Jan 22 04:38:35 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.4 deep-scrub ok
Jan 22 04:38:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:35.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 22 04:38:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 22 04:38:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 22 04:38:35 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 22 04:38:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:35 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00d2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v170: 337 pgs: 1 unknown, 336 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:36 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 22 04:38:36 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 22 04:38:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:36 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c003120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 22 04:38:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 22 04:38:36 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 22 04:38:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:36.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:36.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:36.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:36.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:36.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:36 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0500057d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.19 deep-scrub starts
Jan 22 04:38:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.19 deep-scrub ok
Jan 22 04:38:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:37.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:38:37] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 22 04:38:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:38:37] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 22 04:38:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 22 04:38:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 22 04:38:37 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 22 04:38:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:37 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:37 np0005591760 systemd-logind[747]: New session 41 of user zuul.
Jan 22 04:38:37 np0005591760 systemd[1]: Started Session 41 of User zuul.
Jan 22 04:38:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v173: 337 pgs: 1 unknown, 336 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:38 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 22 04:38:38 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 22 04:38:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:38:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:38 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c003120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:38 np0005591760 python3.9[117765]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:38:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 22 04:38:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 22 04:38:38 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 22 04:38:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:38.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:38 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00d2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:39 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 22 04:38:39 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 22 04:38:39 np0005591760 python3.9[117920]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:38:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:39.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:39 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0500057d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v175: 337 pgs: 1 unknown, 336 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:40 np0005591760 python3.9[118076]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:38:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:40 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:40.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:40 np0005591760 python3.9[118161]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:38:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:40 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c003120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:41.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:41 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00d2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:41 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 22 04:38:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 22 04:38:42 np0005591760 python3.9[118316]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:38:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:42 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0500068d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 22 04:38:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 22 04:38:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 22 04:38:42 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 22 04:38:42 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 22 04:38:42 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 22 04:38:42 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:38:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:38:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:42.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:38:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:42 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:38:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 22 04:38:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 22 04:38:43 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 22 04:38:43 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:43 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 04:38:43 np0005591760 python3.9[118512]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:38:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:43.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 22 04:38:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:43 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c003120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 22 04:38:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:38:43 np0005591760 python3.9[118664]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:38:44 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 22 04:38:44 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:38:44 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 22 04:38:44 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 22 04:38:44 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:38:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:44 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00d2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:44 np0005591760 python3.9[118826]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:38:44 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:38:44 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 04:38:44 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 04:38:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:44.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:44 np0005591760 python3.9[118905]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:38:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:44 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00d2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 22 04:38:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 22 04:38:45 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 22 04:38:45 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 luod=0'0 crt=43'1161 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:45 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:38:45 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:45 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 04:38:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:45.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:45 np0005591760 python3.9[119057]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:38:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:45 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00d2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:45 np0005591760 python3.9[119135]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:38:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v182: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 459 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:46 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 22 04:38:46 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 22 04:38:46 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 22 04:38:46 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:38:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00d2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:46 np0005591760 python3.9[119288]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:38:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:46.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:46 np0005591760 python3.9[119441]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:38:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:46.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:46.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:46.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:46.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0500068d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:47 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Jan 22 04:38:47 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Jan 22 04:38:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 22 04:38:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 22 04:38:47 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 22 04:38:47 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 luod=0'0 crt=43'1161 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 04:38:47 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 04:38:47 np0005591760 python3.9[119594]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:38:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.002000020s ======
Jan 22 04:38:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:47.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Jan 22 04:38:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:38:47] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 22 04:38:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:38:47] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 22 04:38:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:47 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v185: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 459 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:47 np0005591760 python3.9[119746]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:38:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:38:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 22 04:38:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 22 04:38:48 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 22 04:38:48 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 136 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=135/136 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 04:38:48 np0005591760 python3.9[119899]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:38:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:48 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:48.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:48 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540bfd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:49 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 22 04:38:49 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:38:49
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Some PGs (0.005935) are inactive; try again later
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:38:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:49.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:38:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:49 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v187: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 459 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:50 np0005591760 python3.9[120079]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:38:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:50 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:50.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:50 np0005591760 python3.9[120234]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:38:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:50 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:51 np0005591760 python3.9[120386]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:38:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:51.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:51 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v188: 337 pgs: 337 active+clean; 459 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:51 np0005591760 python3.9[120538]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:38:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:52 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:52 np0005591760 python3.9[120692]: ansible-service_facts Invoked
Jan 22 04:38:52 np0005591760 network[120709]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 04:38:52 np0005591760 network[120710]: 'network-scripts' will be removed from distribution in near future.
Jan 22 04:38:52 np0005591760 network[120711]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 04:38:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:52.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:52 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:38:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:53.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:53 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c0a90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v189: 337 pgs: 337 active+clean; 459 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:54 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:54.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:54 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:55.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:55 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:55 np0005591760 python3.9[121216]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:38:55 np0005591760 podman[121278]: 2026-01-22 09:38:55.995255523 +0000 UTC m=+0.056630275 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:38:56 np0005591760 podman[121278]: 2026-01-22 09:38:56.080131774 +0000 UTC m=+0.141506527 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 22 04:38:56 np0005591760 podman[121372]: 2026-01-22 09:38:56.418862699 +0000 UTC m=+0.039515125 container exec e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:38:56 np0005591760 podman[121372]: 2026-01-22 09:38:56.426978571 +0000 UTC m=+0.047630978 container exec_died e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:38:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:56 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c0a90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:56 np0005591760 podman[121459]: 2026-01-22 09:38:56.719196308 +0000 UTC m=+0.038482318 container exec 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:38:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:56.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:56 np0005591760 podman[121459]: 2026-01-22 09:38:56.740379658 +0000 UTC m=+0.059665639 container exec_died 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:38:56 np0005591760 podman[121516]: 2026-01-22 09:38:56.913064791 +0000 UTC m=+0.039638529 container exec 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:38:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:56.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:56.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:56.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:38:56.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:38:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:57 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:57 np0005591760 podman[121516]: 2026-01-22 09:38:57.082239944 +0000 UTC m=+0.208813682 container exec_died 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:38:57 np0005591760 podman[121601]: 2026-01-22 09:38:57.244917979 +0000 UTC m=+0.038806419 container exec fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:38:57 np0005591760 podman[121601]: 2026-01-22 09:38:57.280168291 +0000 UTC m=+0.074056721 container exec_died fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:38:57 np0005591760 podman[121652]: 2026-01-22 09:38:57.438349727 +0000 UTC m=+0.038754802 container exec 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, release=1793, version=2.2.4, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 04:38:57 np0005591760 podman[121652]: 2026-01-22 09:38:57.446022464 +0000 UTC m=+0.046427529 container exec_died 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, distribution-scope=public, release=1793, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2)
Jan 22 04:38:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:57.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:38:57] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 22 04:38:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:38:57] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 22 04:38:57 np0005591760 podman[121754]: 2026-01-22 09:38:57.605091914 +0000 UTC m=+0.037509484 container exec a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:38:57 np0005591760 podman[121754]: 2026-01-22 09:38:57.627367534 +0000 UTC m=+0.059785084 container exec_died a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:38:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:57 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:57 np0005591760 podman[121803]: 2026-01-22 09:38:57.741135379 +0000 UTC m=+0.034890996 container exec d791c845824805d9759eea399ab1b77ce3a5ac18664d4cdeedcfbb4e670a4815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 04:38:57 np0005591760 podman[121803]: 2026-01-22 09:38:57.751973314 +0000 UTC m=+0.045728921 container exec_died d791c845824805d9759eea399ab1b77ce3a5ac18664d4cdeedcfbb4e670a4815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 04:38:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:38:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:38:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:38:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:38:58 np0005591760 python3.9[121936]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 22 04:38:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:38:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:38:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:38:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:38:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:58 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:38:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:38:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:38:58.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:38:58 np0005591760 podman[122122]: 2026-01-22 09:38:58.855559948 +0000 UTC m=+0.030901344 container create d62b64b0e134e937805af18c0acd43a1a39f8c60f3bde5398075de0315e47524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_einstein, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 04:38:58 np0005591760 systemd[1]: Started libpod-conmon-d62b64b0e134e937805af18c0acd43a1a39f8c60f3bde5398075de0315e47524.scope.
Jan 22 04:38:58 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:38:58 np0005591760 podman[122122]: 2026-01-22 09:38:58.910643617 +0000 UTC m=+0.085985023 container init d62b64b0e134e937805af18c0acd43a1a39f8c60f3bde5398075de0315e47524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_einstein, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 22 04:38:58 np0005591760 podman[122122]: 2026-01-22 09:38:58.915960481 +0000 UTC m=+0.091301878 container start d62b64b0e134e937805af18c0acd43a1a39f8c60f3bde5398075de0315e47524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_einstein, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:38:58 np0005591760 podman[122122]: 2026-01-22 09:38:58.917103676 +0000 UTC m=+0.092445073 container attach d62b64b0e134e937805af18c0acd43a1a39f8c60f3bde5398075de0315e47524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:38:58 np0005591760 infallible_einstein[122162]: 167 167
Jan 22 04:38:58 np0005591760 systemd[1]: libpod-d62b64b0e134e937805af18c0acd43a1a39f8c60f3bde5398075de0315e47524.scope: Deactivated successfully.
Jan 22 04:38:58 np0005591760 podman[122122]: 2026-01-22 09:38:58.920257404 +0000 UTC m=+0.095598800 container died d62b64b0e134e937805af18c0acd43a1a39f8c60f3bde5398075de0315e47524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_einstein, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:38:58 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1ac77f6473294c47b01e874ae3b199b6d4905027e3398671ecdf4831303b7a38-merged.mount: Deactivated successfully.
Jan 22 04:38:58 np0005591760 podman[122122]: 2026-01-22 09:38:58.8440149 +0000 UTC m=+0.019356316 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:38:58 np0005591760 podman[122122]: 2026-01-22 09:38:58.940640845 +0000 UTC m=+0.115982242 container remove d62b64b0e134e937805af18c0acd43a1a39f8c60f3bde5398075de0315e47524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_einstein, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:38:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:38:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:38:58 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:38:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:38:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:38:58 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:38:58 np0005591760 systemd[1]: libpod-conmon-d62b64b0e134e937805af18c0acd43a1a39f8c60f3bde5398075de0315e47524.scope: Deactivated successfully.
Jan 22 04:38:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:59 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c0a90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:59 np0005591760 podman[122237]: 2026-01-22 09:38:59.067853851 +0000 UTC m=+0.033553466 container create 073ef1ef747a8e474a1b8ae9a04b25fdf1749baef9f508672ae7ee6ab74a060a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:38:59 np0005591760 systemd[1]: Started libpod-conmon-073ef1ef747a8e474a1b8ae9a04b25fdf1749baef9f508672ae7ee6ab74a060a.scope.
Jan 22 04:38:59 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:38:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0768ebb03cecdbb010c24cbfbd29485ce88262b2a2c65e3ddc162180e1385083/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:38:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0768ebb03cecdbb010c24cbfbd29485ce88262b2a2c65e3ddc162180e1385083/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:38:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0768ebb03cecdbb010c24cbfbd29485ce88262b2a2c65e3ddc162180e1385083/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:38:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0768ebb03cecdbb010c24cbfbd29485ce88262b2a2c65e3ddc162180e1385083/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:38:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0768ebb03cecdbb010c24cbfbd29485ce88262b2a2c65e3ddc162180e1385083/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:38:59 np0005591760 podman[122237]: 2026-01-22 09:38:59.130376094 +0000 UTC m=+0.096075709 container init 073ef1ef747a8e474a1b8ae9a04b25fdf1749baef9f508672ae7ee6ab74a060a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 22 04:38:59 np0005591760 podman[122237]: 2026-01-22 09:38:59.13628245 +0000 UTC m=+0.101982065 container start 073ef1ef747a8e474a1b8ae9a04b25fdf1749baef9f508672ae7ee6ab74a060a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 22 04:38:59 np0005591760 podman[122237]: 2026-01-22 09:38:59.151804698 +0000 UTC m=+0.117504331 container attach 073ef1ef747a8e474a1b8ae9a04b25fdf1749baef9f508672ae7ee6ab74a060a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 04:38:59 np0005591760 podman[122237]: 2026-01-22 09:38:59.054335072 +0000 UTC m=+0.020034706 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:38:59 np0005591760 python3.9[122300]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:38:59 np0005591760 cranky_ritchie[122296]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:38:59 np0005591760 cranky_ritchie[122296]: --> All data devices are unavailable
Jan 22 04:38:59 np0005591760 systemd[1]: libpod-073ef1ef747a8e474a1b8ae9a04b25fdf1749baef9f508672ae7ee6ab74a060a.scope: Deactivated successfully.
Jan 22 04:38:59 np0005591760 podman[122237]: 2026-01-22 09:38:59.399509327 +0000 UTC m=+0.365208940 container died 073ef1ef747a8e474a1b8ae9a04b25fdf1749baef9f508672ae7ee6ab74a060a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:38:59 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0768ebb03cecdbb010c24cbfbd29485ce88262b2a2c65e3ddc162180e1385083-merged.mount: Deactivated successfully.
Jan 22 04:38:59 np0005591760 podman[122237]: 2026-01-22 09:38:59.424476671 +0000 UTC m=+0.390176285 container remove 073ef1ef747a8e474a1b8ae9a04b25fdf1749baef9f508672ae7ee6ab74a060a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 22 04:38:59 np0005591760 systemd[1]: libpod-conmon-073ef1ef747a8e474a1b8ae9a04b25fdf1749baef9f508672ae7ee6ab74a060a.scope: Deactivated successfully.
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:38:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:38:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:38:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:38:59.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:38:59 np0005591760 python3.9[122404]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:38:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:38:59 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:38:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:38:59 np0005591760 podman[122548]: 2026-01-22 09:38:59.861237228 +0000 UTC m=+0.032788693 container create 78da21f1b662d5a623bd8b754a037b63200aecb394cea098247e08f298657531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lehmann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:38:59 np0005591760 systemd[1]: Started libpod-conmon-78da21f1b662d5a623bd8b754a037b63200aecb394cea098247e08f298657531.scope.
Jan 22 04:38:59 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:38:59 np0005591760 podman[122548]: 2026-01-22 09:38:59.917203771 +0000 UTC m=+0.088755256 container init 78da21f1b662d5a623bd8b754a037b63200aecb394cea098247e08f298657531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:38:59 np0005591760 podman[122548]: 2026-01-22 09:38:59.923938288 +0000 UTC m=+0.095489754 container start 78da21f1b662d5a623bd8b754a037b63200aecb394cea098247e08f298657531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lehmann, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:38:59 np0005591760 podman[122548]: 2026-01-22 09:38:59.924998858 +0000 UTC m=+0.096550324 container attach 78da21f1b662d5a623bd8b754a037b63200aecb394cea098247e08f298657531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lehmann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:38:59 np0005591760 happy_lehmann[122584]: 167 167
Jan 22 04:38:59 np0005591760 systemd[1]: libpod-78da21f1b662d5a623bd8b754a037b63200aecb394cea098247e08f298657531.scope: Deactivated successfully.
Jan 22 04:38:59 np0005591760 podman[122548]: 2026-01-22 09:38:59.928381948 +0000 UTC m=+0.099933522 container died 78da21f1b662d5a623bd8b754a037b63200aecb394cea098247e08f298657531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 22 04:38:59 np0005591760 systemd[1]: var-lib-containers-storage-overlay-8611419a2882c33b6039e21944656297a516e94d3713487725b6ab7d7a2513c6-merged.mount: Deactivated successfully.
Jan 22 04:38:59 np0005591760 podman[122548]: 2026-01-22 09:38:59.849175807 +0000 UTC m=+0.020727302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:38:59 np0005591760 podman[122548]: 2026-01-22 09:38:59.947134335 +0000 UTC m=+0.118685799 container remove 78da21f1b662d5a623bd8b754a037b63200aecb394cea098247e08f298657531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lehmann, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:38:59 np0005591760 systemd[1]: libpod-conmon-78da21f1b662d5a623bd8b754a037b63200aecb394cea098247e08f298657531.scope: Deactivated successfully.
Jan 22 04:39:00 np0005591760 podman[122672]: 2026-01-22 09:39:00.076804553 +0000 UTC m=+0.033467563 container create 77b0a0b7699deb732b984fe303a8cfc8d5b5cf48447ae553d61b0376cdc5a9f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:39:00 np0005591760 systemd[1]: Started libpod-conmon-77b0a0b7699deb732b984fe303a8cfc8d5b5cf48447ae553d61b0376cdc5a9f8.scope.
Jan 22 04:39:00 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:39:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae89ff0c09bef13b2d04b78fa505fc4e7ef44232fac87da7a7f7cb67e3aef61e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:39:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae89ff0c09bef13b2d04b78fa505fc4e7ef44232fac87da7a7f7cb67e3aef61e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:39:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae89ff0c09bef13b2d04b78fa505fc4e7ef44232fac87da7a7f7cb67e3aef61e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:39:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae89ff0c09bef13b2d04b78fa505fc4e7ef44232fac87da7a7f7cb67e3aef61e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:39:00 np0005591760 podman[122672]: 2026-01-22 09:39:00.132046319 +0000 UTC m=+0.088709349 container init 77b0a0b7699deb732b984fe303a8cfc8d5b5cf48447ae553d61b0376cdc5a9f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_newton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:39:00 np0005591760 podman[122672]: 2026-01-22 09:39:00.139572189 +0000 UTC m=+0.096235199 container start 77b0a0b7699deb732b984fe303a8cfc8d5b5cf48447ae553d61b0376cdc5a9f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_newton, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 04:39:00 np0005591760 podman[122672]: 2026-01-22 09:39:00.141171374 +0000 UTC m=+0.097834374 container attach 77b0a0b7699deb732b984fe303a8cfc8d5b5cf48447ae553d61b0376cdc5a9f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_newton, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 04:39:00 np0005591760 podman[122672]: 2026-01-22 09:39:00.063046202 +0000 UTC m=+0.019709222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:39:00 np0005591760 python3.9[122670]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:00 np0005591760 jovial_newton[122685]: {
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:    "0": [
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:        {
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            "devices": [
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "/dev/loop3"
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            ],
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            "lv_name": "ceph_lv0",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            "lv_size": "21470642176",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            "name": "ceph_lv0",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            "tags": {
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.cluster_name": "ceph",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.crush_device_class": "",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.encrypted": "0",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.osd_id": "0",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.type": "block",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.vdo": "0",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:                "ceph.with_tpm": "0"
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            },
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            "type": "block",
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:            "vg_name": "ceph_vg0"
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:        }
Jan 22 04:39:00 np0005591760 jovial_newton[122685]:    ]
Jan 22 04:39:00 np0005591760 jovial_newton[122685]: }
Jan 22 04:39:00 np0005591760 systemd[1]: libpod-77b0a0b7699deb732b984fe303a8cfc8d5b5cf48447ae553d61b0376cdc5a9f8.scope: Deactivated successfully.
Jan 22 04:39:00 np0005591760 podman[122772]: 2026-01-22 09:39:00.428433627 +0000 UTC m=+0.018257775 container died 77b0a0b7699deb732b984fe303a8cfc8d5b5cf48447ae553d61b0376cdc5a9f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_newton, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 22 04:39:00 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ae89ff0c09bef13b2d04b78fa505fc4e7ef44232fac87da7a7f7cb67e3aef61e-merged.mount: Deactivated successfully.
Jan 22 04:39:00 np0005591760 podman[122772]: 2026-01-22 09:39:00.450544337 +0000 UTC m=+0.040368474 container remove 77b0a0b7699deb732b984fe303a8cfc8d5b5cf48447ae553d61b0376cdc5a9f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 04:39:00 np0005591760 systemd[1]: libpod-conmon-77b0a0b7699deb732b984fe303a8cfc8d5b5cf48447ae553d61b0376cdc5a9f8.scope: Deactivated successfully.
Jan 22 04:39:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:00 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:00 np0005591760 python3.9[122771]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:00.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:00 np0005591760 podman[122891]: 2026-01-22 09:39:00.890941851 +0000 UTC m=+0.030995121 container create 11f5846e55814b475ef902ff9a612bffb7305df3c27839151c06b89c777c0316 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bardeen, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:39:00 np0005591760 systemd[1]: Started libpod-conmon-11f5846e55814b475ef902ff9a612bffb7305df3c27839151c06b89c777c0316.scope.
Jan 22 04:39:00 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:39:00 np0005591760 podman[122891]: 2026-01-22 09:39:00.943668054 +0000 UTC m=+0.083721314 container init 11f5846e55814b475ef902ff9a612bffb7305df3c27839151c06b89c777c0316 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:39:00 np0005591760 podman[122891]: 2026-01-22 09:39:00.948223203 +0000 UTC m=+0.088276463 container start 11f5846e55814b475ef902ff9a612bffb7305df3c27839151c06b89c777c0316 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bardeen, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 22 04:39:00 np0005591760 podman[122891]: 2026-01-22 09:39:00.94954245 +0000 UTC m=+0.089595710 container attach 11f5846e55814b475ef902ff9a612bffb7305df3c27839151c06b89c777c0316 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bardeen, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 22 04:39:00 np0005591760 interesting_bardeen[122904]: 167 167
Jan 22 04:39:00 np0005591760 systemd[1]: libpod-11f5846e55814b475ef902ff9a612bffb7305df3c27839151c06b89c777c0316.scope: Deactivated successfully.
Jan 22 04:39:00 np0005591760 conmon[122904]: conmon 11f5846e55814b475ef9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11f5846e55814b475ef902ff9a612bffb7305df3c27839151c06b89c777c0316.scope/container/memory.events
Jan 22 04:39:00 np0005591760 podman[122891]: 2026-01-22 09:39:00.95263433 +0000 UTC m=+0.092687590 container died 11f5846e55814b475ef902ff9a612bffb7305df3c27839151c06b89c777c0316 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:39:00 np0005591760 systemd[1]: var-lib-containers-storage-overlay-f134a578f1c4ad3af738bedadff5291662e5a8d60200707ca2e892ec90523c6f-merged.mount: Deactivated successfully.
Jan 22 04:39:00 np0005591760 podman[122891]: 2026-01-22 09:39:00.878008485 +0000 UTC m=+0.018061745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:39:00 np0005591760 podman[122891]: 2026-01-22 09:39:00.978547368 +0000 UTC m=+0.118600628 container remove 11f5846e55814b475ef902ff9a612bffb7305df3c27839151c06b89c777c0316 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 04:39:00 np0005591760 systemd[1]: libpod-conmon-11f5846e55814b475ef902ff9a612bffb7305df3c27839151c06b89c777c0316.scope: Deactivated successfully.
Jan 22 04:39:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:01 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:01 np0005591760 podman[122925]: 2026-01-22 09:39:01.095577624 +0000 UTC m=+0.030185354 container create a0e0c45ecabb023b1924798a5200ef43c90b95697db7689f7dcf5a78039bae66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_galois, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:39:01 np0005591760 systemd[1]: Started libpod-conmon-a0e0c45ecabb023b1924798a5200ef43c90b95697db7689f7dcf5a78039bae66.scope.
Jan 22 04:39:01 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:39:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9b2453d406dd777f9e1496eab4c0b7d80b700409ab608c46f7afee0b4ddac4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:39:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9b2453d406dd777f9e1496eab4c0b7d80b700409ab608c46f7afee0b4ddac4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:39:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9b2453d406dd777f9e1496eab4c0b7d80b700409ab608c46f7afee0b4ddac4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:39:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9b2453d406dd777f9e1496eab4c0b7d80b700409ab608c46f7afee0b4ddac4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:39:01 np0005591760 podman[122925]: 2026-01-22 09:39:01.161461135 +0000 UTC m=+0.096068875 container init a0e0c45ecabb023b1924798a5200ef43c90b95697db7689f7dcf5a78039bae66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_galois, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 04:39:01 np0005591760 podman[122925]: 2026-01-22 09:39:01.168675708 +0000 UTC m=+0.103283438 container start a0e0c45ecabb023b1924798a5200ef43c90b95697db7689f7dcf5a78039bae66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_galois, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:39:01 np0005591760 podman[122925]: 2026-01-22 09:39:01.169661176 +0000 UTC m=+0.104268906 container attach a0e0c45ecabb023b1924798a5200ef43c90b95697db7689f7dcf5a78039bae66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:39:01 np0005591760 podman[122925]: 2026-01-22 09:39:01.084385572 +0000 UTC m=+0.018993312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:39:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:01.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:01 np0005591760 elegant_galois[122939]: {}
Jan 22 04:39:01 np0005591760 lvm[123142]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:39:01 np0005591760 lvm[123142]: VG ceph_vg0 finished
Jan 22 04:39:01 np0005591760 systemd[1]: libpod-a0e0c45ecabb023b1924798a5200ef43c90b95697db7689f7dcf5a78039bae66.scope: Deactivated successfully.
Jan 22 04:39:01 np0005591760 podman[122925]: 2026-01-22 09:39:01.706069525 +0000 UTC m=+0.640677265 container died a0e0c45ecabb023b1924798a5200ef43c90b95697db7689f7dcf5a78039bae66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_galois, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:39:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:01 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c1b90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:01 np0005591760 systemd[1]: var-lib-containers-storage-overlay-7b9b2453d406dd777f9e1496eab4c0b7d80b700409ab608c46f7afee0b4ddac4-merged.mount: Deactivated successfully.
Jan 22 04:39:01 np0005591760 podman[122925]: 2026-01-22 09:39:01.735264041 +0000 UTC m=+0.669871772 container remove a0e0c45ecabb023b1924798a5200ef43c90b95697db7689f7dcf5a78039bae66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_galois, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 04:39:01 np0005591760 systemd[1]: libpod-conmon-a0e0c45ecabb023b1924798a5200ef43c90b95697db7689f7dcf5a78039bae66.scope: Deactivated successfully.
Jan 22 04:39:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:39:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:39:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:39:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:39:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:39:01 np0005591760 python3.9[123144]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:01 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:39:01 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:39:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:02 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:39:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:02.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:39:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:03 np0005591760 python3.9[123333]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:39:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:39:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:03.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:39:03 np0005591760 python3.9[123417]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:39:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:04 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c1b90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:04 np0005591760 systemd[1]: session-41.scope: Deactivated successfully.
Jan 22 04:39:04 np0005591760 systemd[1]: session-41.scope: Consumed 18.144s CPU time.
Jan 22 04:39:04 np0005591760 systemd-logind[747]: Session 41 logged out. Waiting for processes to exit.
Jan 22 04:39:04 np0005591760 systemd-logind[747]: Removed session 41.
Jan 22 04:39:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:39:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:04.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:39:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:05 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:39:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:05.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:39:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:05 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:39:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:06 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:06.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:06.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:06.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:06.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:06.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:07 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c28a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:07.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:39:07] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 22 04:39:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:39:07] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 22 04:39:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:07 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:39:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:39:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:08 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:08.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:09 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:09.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:09 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c28a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:39:10 np0005591760 systemd-logind[747]: New session 42 of user zuul.
Jan 22 04:39:10 np0005591760 systemd[1]: Started Session 42 of User zuul.
Jan 22 04:39:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:10 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:10.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:11 np0005591760 python3.9[123632]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:11 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:11.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:11 np0005591760 python3.9[123784]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:11 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:39:11 np0005591760 python3.9[123862]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:12 np0005591760 systemd[1]: session-42.scope: Deactivated successfully.
Jan 22 04:39:12 np0005591760 systemd[1]: session-42.scope: Consumed 1.132s CPU time.
Jan 22 04:39:12 np0005591760 systemd-logind[747]: Session 42 logged out. Waiting for processes to exit.
Jan 22 04:39:12 np0005591760 systemd-logind[747]: Removed session 42.
Jan 22 04:39:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:12 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c28a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:12.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c28a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:39:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:39:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:13.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:39:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:39:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:14 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:14.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:15 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:15.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:15 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c28a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 22 04:39:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:16 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:16.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:16.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:16.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:16.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:16.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:17 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb04c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:17 np0005591760 systemd-logind[747]: New session 43 of user zuul.
Jan 22 04:39:17 np0005591760 systemd[1]: Started Session 43 of User zuul.
Jan 22 04:39:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:17.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:39:17] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 22 04:39:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:39:17] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 22 04:39:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:17 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x5624d1a5d460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:17 np0005591760 python3.9[124046]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:39:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:39:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:18 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c28a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:18.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:18 np0005591760 python3.9[124206]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:19 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:39:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:39:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:39:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:39:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:39:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:39:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:19.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:19 np0005591760 python3.9[124381]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:19 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:19 np0005591760 python3.9[124459]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.l5ykstlw recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:20 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:20 np0005591760 python3.9[124612]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:20.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:20 np0005591760 python3.9[124691]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.mosuvfhl recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:21 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c28a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:21 np0005591760 python3.9[124843]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:39:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:21.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:21 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:39:21 np0005591760 python3.9[124995]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:22 np0005591760 python3.9[125074]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:39:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:22 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:22.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:23 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:23 np0005591760 python3.9[125227]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:39:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:23.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:23 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c28a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:23 np0005591760 python3.9[125305]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:39:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:24 np0005591760 python3.9[125458]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:24 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:24.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:24 np0005591760 python3.9[125611]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:25 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:25 np0005591760 python3.9[125689]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:25.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:25 np0005591760 python3.9[125841]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:25 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:39:26 np0005591760 python3.9[125920]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:26 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c28a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:26.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:26 np0005591760 python3.9[126073]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:39:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:26.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:26 np0005591760 systemd[1]: Reloading.
Jan 22 04:39:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:27 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:27 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:39:27 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:39:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:27.065Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:27.065Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:27.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:39:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:27.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:39:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:39:27] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Jan 22 04:39:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:39:27] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Jan 22 04:39:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:27 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:27 np0005591760 python3.9[126262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:28 np0005591760 python3.9[126341]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:39:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:28 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:28 np0005591760 python3.9[126493]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:28.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:28 np0005591760 python3.9[126572]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:29 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c28a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:29.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:29 np0005591760 python3.9[126724]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:39:29 np0005591760 systemd[1]: Reloading.
Jan 22 04:39:29 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:39:29 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:39:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:29 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:29 np0005591760 systemd[1]: Starting Create netns directory...
Jan 22 04:39:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:29 np0005591760 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 04:39:29 np0005591760 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 04:39:29 np0005591760 systemd[1]: Finished Create netns directory.
Jan 22 04:39:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:30 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:30 np0005591760 python3.9[126942]: ansible-ansible.builtin.service_facts Invoked
Jan 22 04:39:30 np0005591760 network[126959]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 04:39:30 np0005591760 network[126960]: 'network-scripts' will be removed from distribution in near future.
Jan 22 04:39:30 np0005591760 network[126961]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 04:39:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:39:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:30.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:39:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:31 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:39:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:31.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:39:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:31 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c28a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:39:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:32 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:39:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:32.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:39:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:33 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:39:33 np0005591760 python3.9[127226]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:33.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:33 np0005591760 python3.9[127304]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:33 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:34 np0005591760 python3.9[127458]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060006210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:34 np0005591760 python3.9[127610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:34.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:35 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb05c0023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:35 np0005591760 python3.9[127689]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:39:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:35.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:39:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:35 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060006210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:39:36 np0005591760 python3.9[127842]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 04:39:36 np0005591760 systemd[1]: Starting Time & Date Service...
Jan 22 04:39:36 np0005591760 systemd[1]: Started Time & Date Service.
Jan 22 04:39:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:36 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:36.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:36 np0005591760 python3.9[127999]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:36.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:36.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:36.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:36.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:37 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb05c0023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:37 np0005591760 python3.9[128151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:39:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:37.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:39:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:39:37] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Jan 22 04:39:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:39:37] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Jan 22 04:39:37 np0005591760 python3.9[128229]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:37 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:38 np0005591760 python3.9[128382]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:39:38 np0005591760 python3.9[128460]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.rsmcsdq3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:38 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:38.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:38 np0005591760 python3.9[128613]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:39 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:39 np0005591760 python3.9[128691]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:39.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:39 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb05c003350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:40 np0005591760 python3.9[128844]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:39:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:40 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:40.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:40 np0005591760 python3[128998]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 04:39:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:41 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060006210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:41 np0005591760 python3.9[129150]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:41.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:41 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:41 np0005591760 python3.9[129228]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:39:42 np0005591760 python3.9[129381]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:42 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb05c003350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:39:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:42.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:39:42 np0005591760 python3.9[129507]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074781.9348323-894-268394509018044/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:42 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:39:42 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:39:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:43 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:39:43 np0005591760 python3.9[129660]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:43.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:43 np0005591760 python3.9[129738]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:43 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060006210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:44 np0005591760 python3.9[129891]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:44 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:44 np0005591760 python3.9[129969]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:44.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:45 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:45 np0005591760 python3.9[130122]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:45 np0005591760 python3.9[130200]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:45.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:45 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:39:46 np0005591760 python3.9[130352]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:39:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060006210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:46 np0005591760 python3.9[130508]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:46.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:46.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:46.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:46.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:46.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:47 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb05c0048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:47 np0005591760 python3.9[130661]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:47.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:39:47] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Jan 22 04:39:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:39:47] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Jan 22 04:39:47 np0005591760 python3.9[130813]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:47 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:39:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:48 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:48 np0005591760 python3.9[130966]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 04:39:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:48.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:49 np0005591760 python3.9[131119]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 04:39:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:49 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:39:49
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', '.nfs', 'cephfs.cephfs.data', '.mgr', 'vms', 'default.rgw.meta', 'backups']
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:39:49 np0005591760 systemd[1]: session-43.scope: Deactivated successfully.
Jan 22 04:39:49 np0005591760 systemd[1]: session-43.scope: Consumed 20.448s CPU time.
Jan 22 04:39:49 np0005591760 systemd-logind[747]: Session 43 logged out. Waiting for processes to exit.
Jan 22 04:39:49 np0005591760 systemd-logind[747]: Removed session 43.
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:39:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:49.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:49 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb05c0048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:50 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:50.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:51 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/093951 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:39:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:51.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:51 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:39:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:52 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb05c005600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:52.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:53 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:39:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:53.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:53 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:54 np0005591760 systemd-logind[747]: New session 44 of user zuul.
Jan 22 04:39:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:54 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:54 np0005591760 systemd[1]: Started Session 44 of User zuul.
Jan 22 04:39:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:54.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:55 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb05c005600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:55 np0005591760 python3.9[131330]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 22 04:39:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:55.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:55 np0005591760 python3.9[131482]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:39:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:55 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:39:56 np0005591760 python3.9[131637]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 22 04:39:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:56 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:56 np0005591760 python3.9[131789]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.a3uax82c follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:39:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:39:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:56.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:39:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:56.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:56.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:56.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:39:56.977Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:39:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:57 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:57 np0005591760 python3.9[131915]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.a3uax82c mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074796.3739314-102-161045277176853/.source.a3uax82c _original_basename=.zuat9nl2 follow=False checksum=29616629aac123748dd219790bea456c41d2072c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:57.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:39:57] "GET /metrics HTTP/1.1" 200 48392 "" "Prometheus/2.51.0"
Jan 22 04:39:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:39:57] "GET /metrics HTTP/1.1" 200 48392 "" "Prometheus/2.51.0"
Jan 22 04:39:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:57 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb05c005600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 22 04:39:58 np0005591760 python3.9[132068]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:39:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:39:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:58 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:39:58.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:58 np0005591760 python3.9[132221]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1l/4Hnab8cJ+0NgZRyND+668QQ18xCAMiTa4tJfwkacqv2+xu0AP833wzvRbj+BSz/GJYAjYZHtl/LPY/fgAiwZLhNui+6RFQXnMI+TWlUgadcYlxCFSLNXdeIU4VHKdxnYN8cw8WtM+PFaCdmFRk0NGTRLladuZ2Ft6qgEk/ocZCZ1hweLpc0NBPMupsV5ABFtNEZPBg5lEqxBdbFOY3MxlYJEKWIsWCyxu9jzoxc8ct4ejcM8FVx9pujC2XCWVumSYrXkp9LnbeYCOlxnalYYTgZWNh3ilMYw3g85DVUyF1ZECfbN4/uuu9emfUiC8EmIRofJTX7/IPDpqM0CgSFHt6gq45OgfrZ+YHcpPg8Bq5JWL3rpkIoZDiidmCCGrtku8huN9VGYcahOdJVixsNrfIS2jx9k86e19gNzUSKc3qxM6HCUrH0yEbXwcOcG6b1EcBllpJsHB3uXZNar6PeI2C+BkUQH/0520RqM7Zb0ZEg4+6S6i+Z11Ddhkn+Sk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEZOEP9uQiV1zH3a3aHqfWGEuJqzUo4rClu3BLMlWitr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNySjQgocwwOdUR7+1+vff+WJ7HHi2x7SZejx49o87M82KSvvvJ1bXTTeQ2yV4jf9DSKuJ6HcIHDr6bnAXEDEj8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDx6FoZ1mQHUkExUKBX3RXUJtaZVmdK+/kJ75+oWOFtIZlx0mZcdVNn/rW0Q++oQhtNRWXFfZrC6xkhCT1INz4AehTVQ2y9DTa6PxylfZKv4SS0yNLP/UkFFMiKtWgxzfnFYniRmVr6pgKNAsIxOlGQHtYY9MzvNCU0rfxVJQV1DM7am+c3mbsqlU0w7R+Tur5zDSLFdysQdDqAk4UqlqkgYagUBOhC/cnkuUNOyj3idOKJhFrz/mnkO3P/KrXcgMPfFtu+yx5rQNDNyoZV1bp+uPgP8kvQGe5ol/cbTEiXlZ5BEgYcKbky8H1ICbcoiG5YcmEMNOm8s88fxvf6dJpdeAmjmraoHZtKson2jeZ7NsYgsjNhwKEElcxzAfhnhK+IfalpZhHQxGypR/IPlQrLlJOrbyAEIyk40nASUHxlJrOXP1lA9dvLaG/3KkIa2sPwaIgdVhzpmyodJds2sMg6cngRljDGY1UBTYGyo8vNNILFoCzMPNDcNCyY9xWYz8M=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICDQuz7VE0tTRnQJ96QrHIwmJh8osJY9A2+gmzkUlh54#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM7hnQz957+RtY0Mltzkw+lJRI4x2IlQwAuVKb+t24lorNdYqOmeiT8j8X9huVxPKGZSUxesKQ7YFrI9bxqNRo4=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLkHp1/Qvor0RkXO+PvZvnJssDpVN93zM11quNN8iQ4KKQf8UHuKy+z84HXpOkzuxv1FNmR50SFPdR2h52T9/BEP+zzSmYli9cDaisI9zLQpghAnG+lXYjqsiPIXqR2z4IheTXQWRoc0c/9XzYCUMaMD73LVsv2ZTHG2Y7QfvK4MxYDPfGzTPihT0BaumTQQi1aKi5eILvXezyBhIgOrgWXDy73LvUS0A1PnwBTWjez2dmfEl2SozhpeqVRSmWdCZ8dRtXREfB6Mq/AC0SFrdQRYBB1fp6IKFrJhehXq8uN9YGQim7NDv95g1Vbg09hBzVMVRBut+meLFMgQicOFxX4cOH/zmBq2HZZ4NgoXQIttG2MWvRDeeOArcoiR4trg88CvXIKbHm7X3Xz124i1la6Znzd233vMLjW61sfm2BSiRvi2U199hCeHLpCKZDeXEfNKKws4/PCyJpilTrDhy01w/oqI6uKjCvuEpfNoDSqx4gfjAyjJboFWEV2ArMddk=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAzNyDe1tBrOdz2+WL/pj9pc2M51PHCPiPpvoZYn4bHE#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBELYxft8jWfz1ywTUaPBtZwChEDFG53eKlkYcIDxgJP7KVnKVHGrkh7LMAVvlpn5gDq4gHPOx2/pvsvKR+u3AfU=#012 create=True mode=0644 path=/tmp/ansible.a3uax82c state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:39:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:59 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:59 np0005591760 python3.9[132373]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.a3uax82c' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:39:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:39:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:39:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:39:59.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:39:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:39:59 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060008590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:39:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 22 04:39:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/093959 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:40:00 np0005591760 ceph-mon[74254]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 22 04:40:00 np0005591760 python3.9[132528]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.a3uax82c state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:00 np0005591760 ceph-mon[74254]: overall HEALTH_OK
Jan 22 04:40:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:00 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:40:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:00 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:00 np0005591760 systemd[1]: session-44.scope: Deactivated successfully.
Jan 22 04:40:00 np0005591760 systemd[1]: session-44.scope: Consumed 3.376s CPU time.
Jan 22 04:40:00 np0005591760 systemd-logind[747]: Session 44 logged out. Waiting for processes to exit.
Jan 22 04:40:00 np0005591760 systemd-logind[747]: Removed session 44.
Jan 22 04:40:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:00.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:01 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb05c005600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:01.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:01 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:40:02 np0005591760 podman[132661]: 2026-01-22 09:40:02.439813724 +0000 UTC m=+0.049211214 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 04:40:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:02 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060008590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:02 np0005591760 podman[132678]: 2026-01-22 09:40:02.567879695 +0000 UTC m=+0.046331264 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:40:02 np0005591760 podman[132661]: 2026-01-22 09:40:02.571205467 +0000 UTC m=+0.180602957 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:40:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:02.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:02 np0005591760 podman[132760]: 2026-01-22 09:40:02.861319894 +0000 UTC m=+0.035290373 container exec e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:40:02 np0005591760 podman[132760]: 2026-01-22 09:40:02.869970648 +0000 UTC m=+0.043941106 container exec_died e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:40:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:03 np0005591760 podman[132844]: 2026-01-22 09:40:03.113230415 +0000 UTC m=+0.034340771 container exec 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:40:03 np0005591760 podman[132844]: 2026-01-22 09:40:03.134977948 +0000 UTC m=+0.056088284 container exec_died 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:40:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:40:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:40:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:40:03 np0005591760 podman[132903]: 2026-01-22 09:40:03.28381542 +0000 UTC m=+0.036569715 container exec 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:40:03 np0005591760 podman[132903]: 2026-01-22 09:40:03.409085578 +0000 UTC m=+0.161839873 container exec_died 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:40:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:03.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:03 np0005591760 podman[132959]: 2026-01-22 09:40:03.552566048 +0000 UTC m=+0.033673343 container exec fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:40:03 np0005591760 podman[132959]: 2026-01-22 09:40:03.563020173 +0000 UTC m=+0.044127448 container exec_died fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:40:03 np0005591760 podman[133011]: 2026-01-22 09:40:03.707812027 +0000 UTC m=+0.034550288 container exec 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, name=keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.expose-services=)
Jan 22 04:40:03 np0005591760 podman[133011]: 2026-01-22 09:40:03.717947651 +0000 UTC m=+0.044685902 container exec_died 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 22 04:40:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb05c005600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:03 np0005591760 podman[133062]: 2026-01-22 09:40:03.860403098 +0000 UTC m=+0.035854497 container exec a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:40:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:40:03 np0005591760 podman[133062]: 2026-01-22 09:40:03.883297031 +0000 UTC m=+0.058748440 container exec_died a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:40:03 np0005591760 podman[133110]: 2026-01-22 09:40:03.998095182 +0000 UTC m=+0.039110397 container exec d791c845824805d9759eea399ab1b77ce3a5ac18664d4cdeedcfbb4e670a4815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 22 04:40:04 np0005591760 podman[133110]: 2026-01-22 09:40:04.006013654 +0000 UTC m=+0.047028869 container exec_died d791c845824805d9759eea399ab1b77ce3a5ac18664d4cdeedcfbb4e670a4815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Jan 22 04:40:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:40:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:40:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:40:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:40:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:04 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb05c005600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:40:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:40:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:40:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:40:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:04 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:40:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:04 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:40:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:04.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:05 np0005591760 podman[133325]: 2026-01-22 09:40:05.022515776 +0000 UTC m=+0.026379178 container create ca91da5bc2183245d63cd7e085dd1a32933d264853eaacfb7080a14b000d19d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:40:05 np0005591760 systemd[1]: Started libpod-conmon-ca91da5bc2183245d63cd7e085dd1a32933d264853eaacfb7080a14b000d19d2.scope.
Jan 22 04:40:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:05 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060008590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:05 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:40:05 np0005591760 podman[133325]: 2026-01-22 09:40:05.074741535 +0000 UTC m=+0.078604928 container init ca91da5bc2183245d63cd7e085dd1a32933d264853eaacfb7080a14b000d19d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_beaver, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:40:05 np0005591760 podman[133325]: 2026-01-22 09:40:05.079188812 +0000 UTC m=+0.083052204 container start ca91da5bc2183245d63cd7e085dd1a32933d264853eaacfb7080a14b000d19d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_beaver, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:40:05 np0005591760 podman[133325]: 2026-01-22 09:40:05.080259622 +0000 UTC m=+0.084123014 container attach ca91da5bc2183245d63cd7e085dd1a32933d264853eaacfb7080a14b000d19d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_beaver, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:40:05 np0005591760 flamboyant_beaver[133339]: 167 167
Jan 22 04:40:05 np0005591760 systemd[1]: libpod-ca91da5bc2183245d63cd7e085dd1a32933d264853eaacfb7080a14b000d19d2.scope: Deactivated successfully.
Jan 22 04:40:05 np0005591760 podman[133325]: 2026-01-22 09:40:05.083155232 +0000 UTC m=+0.087018624 container died ca91da5bc2183245d63cd7e085dd1a32933d264853eaacfb7080a14b000d19d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 22 04:40:05 np0005591760 systemd[1]: var-lib-containers-storage-overlay-2ef843701480a70004c33f468baae155001447bb2b55963dc361a1eb6e4c6200-merged.mount: Deactivated successfully.
Jan 22 04:40:05 np0005591760 podman[133325]: 2026-01-22 09:40:05.107297721 +0000 UTC m=+0.111161113 container remove ca91da5bc2183245d63cd7e085dd1a32933d264853eaacfb7080a14b000d19d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:40:05 np0005591760 podman[133325]: 2026-01-22 09:40:05.011704237 +0000 UTC m=+0.015567650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:40:05 np0005591760 systemd[1]: libpod-conmon-ca91da5bc2183245d63cd7e085dd1a32933d264853eaacfb7080a14b000d19d2.scope: Deactivated successfully.
Jan 22 04:40:05 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:40:05 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:40:05 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:40:05 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:40:05 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:40:05 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:40:05 np0005591760 podman[133363]: 2026-01-22 09:40:05.227842789 +0000 UTC m=+0.033472325 container create b5a82629ef1af1626b868ce05ff7f0632d895ac71852e47077d824ce4f952311 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_brown, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Jan 22 04:40:05 np0005591760 systemd[1]: Started libpod-conmon-b5a82629ef1af1626b868ce05ff7f0632d895ac71852e47077d824ce4f952311.scope.
Jan 22 04:40:05 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:40:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8ea5469065c9e11fa35c4c14dd89bd664da26ba8a669007613a34fadd0604f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8ea5469065c9e11fa35c4c14dd89bd664da26ba8a669007613a34fadd0604f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8ea5469065c9e11fa35c4c14dd89bd664da26ba8a669007613a34fadd0604f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8ea5469065c9e11fa35c4c14dd89bd664da26ba8a669007613a34fadd0604f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8ea5469065c9e11fa35c4c14dd89bd664da26ba8a669007613a34fadd0604f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:05 np0005591760 podman[133363]: 2026-01-22 09:40:05.286248883 +0000 UTC m=+0.091878420 container init b5a82629ef1af1626b868ce05ff7f0632d895ac71852e47077d824ce4f952311 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_brown, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:40:05 np0005591760 podman[133363]: 2026-01-22 09:40:05.291329193 +0000 UTC m=+0.096958729 container start b5a82629ef1af1626b868ce05ff7f0632d895ac71852e47077d824ce4f952311 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:40:05 np0005591760 podman[133363]: 2026-01-22 09:40:05.292523877 +0000 UTC m=+0.098153413 container attach b5a82629ef1af1626b868ce05ff7f0632d895ac71852e47077d824ce4f952311 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 04:40:05 np0005591760 podman[133363]: 2026-01-22 09:40:05.214229106 +0000 UTC m=+0.019858652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:40:05 np0005591760 stupefied_brown[133376]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:40:05 np0005591760 stupefied_brown[133376]: --> All data devices are unavailable
Jan 22 04:40:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:40:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:05.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:40:05 np0005591760 systemd[1]: libpod-b5a82629ef1af1626b868ce05ff7f0632d895ac71852e47077d824ce4f952311.scope: Deactivated successfully.
Jan 22 04:40:05 np0005591760 podman[133363]: 2026-01-22 09:40:05.559567226 +0000 UTC m=+0.365196751 container died b5a82629ef1af1626b868ce05ff7f0632d895ac71852e47077d824ce4f952311 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_brown, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 22 04:40:05 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3b8ea5469065c9e11fa35c4c14dd89bd664da26ba8a669007613a34fadd0604f-merged.mount: Deactivated successfully.
Jan 22 04:40:05 np0005591760 podman[133363]: 2026-01-22 09:40:05.583653869 +0000 UTC m=+0.389283395 container remove b5a82629ef1af1626b868ce05ff7f0632d895ac71852e47077d824ce4f952311 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 22 04:40:05 np0005591760 systemd[1]: libpod-conmon-b5a82629ef1af1626b868ce05ff7f0632d895ac71852e47077d824ce4f952311.scope: Deactivated successfully.
Jan 22 04:40:05 np0005591760 systemd-logind[747]: New session 45 of user zuul.
Jan 22 04:40:05 np0005591760 systemd[1]: Started Session 45 of User zuul.
Jan 22 04:40:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:05 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Jan 22 04:40:06 np0005591760 podman[133539]: 2026-01-22 09:40:06.000492747 +0000 UTC m=+0.026975202 container create faa0ebba416280c24413064a8c6b7f1c06edf7c2eb30c13248ed213ae3e761da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:40:06 np0005591760 systemd[1]: Started libpod-conmon-faa0ebba416280c24413064a8c6b7f1c06edf7c2eb30c13248ed213ae3e761da.scope.
Jan 22 04:40:06 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:40:06 np0005591760 podman[133539]: 2026-01-22 09:40:06.050553423 +0000 UTC m=+0.077035898 container init faa0ebba416280c24413064a8c6b7f1c06edf7c2eb30c13248ed213ae3e761da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:40:06 np0005591760 podman[133539]: 2026-01-22 09:40:06.054985851 +0000 UTC m=+0.081468306 container start faa0ebba416280c24413064a8c6b7f1c06edf7c2eb30c13248ed213ae3e761da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_thompson, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:40:06 np0005591760 podman[133539]: 2026-01-22 09:40:06.056126232 +0000 UTC m=+0.082608687 container attach faa0ebba416280c24413064a8c6b7f1c06edf7c2eb30c13248ed213ae3e761da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:40:06 np0005591760 quirky_thompson[133552]: 167 167
Jan 22 04:40:06 np0005591760 systemd[1]: libpod-faa0ebba416280c24413064a8c6b7f1c06edf7c2eb30c13248ed213ae3e761da.scope: Deactivated successfully.
Jan 22 04:40:06 np0005591760 conmon[133552]: conmon faa0ebba416280c24413 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-faa0ebba416280c24413064a8c6b7f1c06edf7c2eb30c13248ed213ae3e761da.scope/container/memory.events
Jan 22 04:40:06 np0005591760 podman[133539]: 2026-01-22 09:40:06.060042939 +0000 UTC m=+0.086525394 container died faa0ebba416280c24413064a8c6b7f1c06edf7c2eb30c13248ed213ae3e761da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_thompson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 04:40:06 np0005591760 systemd[1]: var-lib-containers-storage-overlay-70db6c6592dd66ea510ae171996c03e5ba84c15d5436ffe000ded403ed30959c-merged.mount: Deactivated successfully.
Jan 22 04:40:06 np0005591760 podman[133539]: 2026-01-22 09:40:06.078544009 +0000 UTC m=+0.105026463 container remove faa0ebba416280c24413064a8c6b7f1c06edf7c2eb30c13248ed213ae3e761da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:40:06 np0005591760 podman[133539]: 2026-01-22 09:40:05.99007987 +0000 UTC m=+0.016562345 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:40:06 np0005591760 systemd[1]: libpod-conmon-faa0ebba416280c24413064a8c6b7f1c06edf7c2eb30c13248ed213ae3e761da.scope: Deactivated successfully.
Jan 22 04:40:06 np0005591760 podman[133621]: 2026-01-22 09:40:06.198619371 +0000 UTC m=+0.032437884 container create b1aa7d3cb03f885d22770f780edb39a8c804ccc35b460be781a6121eef9c9c30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 04:40:06 np0005591760 systemd[1]: Started libpod-conmon-b1aa7d3cb03f885d22770f780edb39a8c804ccc35b460be781a6121eef9c9c30.scope.
Jan 22 04:40:06 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:40:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afba9d7c76067dd3cc693f31fe1f72cfb1f344dd0db3caa6fd31b5d7d3debcad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afba9d7c76067dd3cc693f31fe1f72cfb1f344dd0db3caa6fd31b5d7d3debcad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afba9d7c76067dd3cc693f31fe1f72cfb1f344dd0db3caa6fd31b5d7d3debcad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afba9d7c76067dd3cc693f31fe1f72cfb1f344dd0db3caa6fd31b5d7d3debcad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:06 np0005591760 podman[133621]: 2026-01-22 09:40:06.251932299 +0000 UTC m=+0.085750812 container init b1aa7d3cb03f885d22770f780edb39a8c804ccc35b460be781a6121eef9c9c30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 04:40:06 np0005591760 podman[133621]: 2026-01-22 09:40:06.257427743 +0000 UTC m=+0.091246246 container start b1aa7d3cb03f885d22770f780edb39a8c804ccc35b460be781a6121eef9c9c30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:40:06 np0005591760 podman[133621]: 2026-01-22 09:40:06.258560209 +0000 UTC m=+0.092378731 container attach b1aa7d3cb03f885d22770f780edb39a8c804ccc35b460be781a6121eef9c9c30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 04:40:06 np0005591760 podman[133621]: 2026-01-22 09:40:06.188094322 +0000 UTC m=+0.021912845 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:40:06 np0005591760 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]: {
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:    "0": [
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:        {
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            "devices": [
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "/dev/loop3"
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            ],
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            "lv_name": "ceph_lv0",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            "lv_size": "21470642176",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            "name": "ceph_lv0",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            "tags": {
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.cluster_name": "ceph",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.crush_device_class": "",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.encrypted": "0",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.osd_id": "0",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.type": "block",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.vdo": "0",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:                "ceph.with_tpm": "0"
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            },
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            "type": "block",
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:            "vg_name": "ceph_vg0"
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:        }
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]:    ]
Jan 22 04:40:06 np0005591760 pedantic_clarke[133659]: }
Jan 22 04:40:06 np0005591760 systemd[1]: libpod-b1aa7d3cb03f885d22770f780edb39a8c804ccc35b460be781a6121eef9c9c30.scope: Deactivated successfully.
Jan 22 04:40:06 np0005591760 podman[133621]: 2026-01-22 09:40:06.495512471 +0000 UTC m=+0.329330984 container died b1aa7d3cb03f885d22770f780edb39a8c804ccc35b460be781a6121eef9c9c30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 04:40:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:06 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:06 np0005591760 python3.9[133690]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:40:06 np0005591760 systemd[1]: var-lib-containers-storage-overlay-afba9d7c76067dd3cc693f31fe1f72cfb1f344dd0db3caa6fd31b5d7d3debcad-merged.mount: Deactivated successfully.
Jan 22 04:40:06 np0005591760 podman[133621]: 2026-01-22 09:40:06.519143044 +0000 UTC m=+0.352961547 container remove b1aa7d3cb03f885d22770f780edb39a8c804ccc35b460be781a6121eef9c9c30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 04:40:06 np0005591760 systemd[1]: libpod-conmon-b1aa7d3cb03f885d22770f780edb39a8c804ccc35b460be781a6121eef9c9c30.scope: Deactivated successfully.
Jan 22 04:40:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:06.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:06 np0005591760 podman[133869]: 2026-01-22 09:40:06.932983087 +0000 UTC m=+0.028062071 container create d808974955b1c721c11d4c4662002badaa24fa42bb253051ebf0663f8ecfd476 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:40:06 np0005591760 systemd[1]: Started libpod-conmon-d808974955b1c721c11d4c4662002badaa24fa42bb253051ebf0663f8ecfd476.scope.
Jan 22 04:40:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:06.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:06 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:40:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:06.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:06.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:06.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:06 np0005591760 podman[133869]: 2026-01-22 09:40:06.98411869 +0000 UTC m=+0.079197694 container init d808974955b1c721c11d4c4662002badaa24fa42bb253051ebf0663f8ecfd476 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_babbage, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 22 04:40:06 np0005591760 podman[133869]: 2026-01-22 09:40:06.988263667 +0000 UTC m=+0.083342661 container start d808974955b1c721c11d4c4662002badaa24fa42bb253051ebf0663f8ecfd476 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_babbage, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 04:40:06 np0005591760 nice_babbage[133882]: 167 167
Jan 22 04:40:06 np0005591760 systemd[1]: libpod-d808974955b1c721c11d4c4662002badaa24fa42bb253051ebf0663f8ecfd476.scope: Deactivated successfully.
Jan 22 04:40:06 np0005591760 podman[133869]: 2026-01-22 09:40:06.993763708 +0000 UTC m=+0.088842702 container attach d808974955b1c721c11d4c4662002badaa24fa42bb253051ebf0663f8ecfd476 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 04:40:06 np0005591760 podman[133869]: 2026-01-22 09:40:06.994265585 +0000 UTC m=+0.089344569 container died d808974955b1c721c11d4c4662002badaa24fa42bb253051ebf0663f8ecfd476 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 04:40:07 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1e2c8012f9fdf20eab6770e9ff3fe4576b39eb0854dd3050d0a8e8f7e7e6a1c3-merged.mount: Deactivated successfully.
Jan 22 04:40:07 np0005591760 podman[133869]: 2026-01-22 09:40:07.012326976 +0000 UTC m=+0.107405960 container remove d808974955b1c721c11d4c4662002badaa24fa42bb253051ebf0663f8ecfd476 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_babbage, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 04:40:07 np0005591760 podman[133869]: 2026-01-22 09:40:06.921190779 +0000 UTC m=+0.016269783 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:40:07 np0005591760 systemd[1]: libpod-conmon-d808974955b1c721c11d4c4662002badaa24fa42bb253051ebf0663f8ecfd476.scope: Deactivated successfully.
Jan 22 04:40:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:07 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:07 np0005591760 podman[133905]: 2026-01-22 09:40:07.128770178 +0000 UTC m=+0.029430612 container create da37c2d4a8fa65065c11172eaf7dd299f60954678b9246f08969934c1265613b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:40:07 np0005591760 systemd[1]: Started libpod-conmon-da37c2d4a8fa65065c11172eaf7dd299f60954678b9246f08969934c1265613b.scope.
Jan 22 04:40:07 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:40:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb60a9d5d391dc6f35a7b61213b9b352b1d8236d57518af06d2f35a2f92c4b4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb60a9d5d391dc6f35a7b61213b9b352b1d8236d57518af06d2f35a2f92c4b4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb60a9d5d391dc6f35a7b61213b9b352b1d8236d57518af06d2f35a2f92c4b4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb60a9d5d391dc6f35a7b61213b9b352b1d8236d57518af06d2f35a2f92c4b4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:40:07 np0005591760 podman[133905]: 2026-01-22 09:40:07.187717713 +0000 UTC m=+0.088378168 container init da37c2d4a8fa65065c11172eaf7dd299f60954678b9246f08969934c1265613b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_lewin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:40:07 np0005591760 podman[133905]: 2026-01-22 09:40:07.193527429 +0000 UTC m=+0.094187864 container start da37c2d4a8fa65065c11172eaf7dd299f60954678b9246f08969934c1265613b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_lewin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:40:07 np0005591760 podman[133905]: 2026-01-22 09:40:07.194707254 +0000 UTC m=+0.095367688 container attach da37c2d4a8fa65065c11172eaf7dd299f60954678b9246f08969934c1265613b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_lewin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 22 04:40:07 np0005591760 podman[133905]: 2026-01-22 09:40:07.116726967 +0000 UTC m=+0.017387421 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:40:07 np0005591760 python3.9[133998]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 04:40:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:07.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:40:07] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Jan 22 04:40:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:40:07] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Jan 22 04:40:07 np0005591760 elastic_lewin[133941]: {}
Jan 22 04:40:07 np0005591760 lvm[134120]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:40:07 np0005591760 lvm[134120]: VG ceph_vg0 finished
Jan 22 04:40:07 np0005591760 systemd[1]: libpod-da37c2d4a8fa65065c11172eaf7dd299f60954678b9246f08969934c1265613b.scope: Deactivated successfully.
Jan 22 04:40:07 np0005591760 podman[133905]: 2026-01-22 09:40:07.714531847 +0000 UTC m=+0.615192280 container died da37c2d4a8fa65065c11172eaf7dd299f60954678b9246f08969934c1265613b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_lewin, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:40:07 np0005591760 lvm[134142]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:40:07 np0005591760 lvm[134142]: VG ceph_vg0 finished
Jan 22 04:40:07 np0005591760 systemd[1]: var-lib-containers-storage-overlay-fb60a9d5d391dc6f35a7b61213b9b352b1d8236d57518af06d2f35a2f92c4b4a-merged.mount: Deactivated successfully.
Jan 22 04:40:07 np0005591760 podman[133905]: 2026-01-22 09:40:07.74293974 +0000 UTC m=+0.643600173 container remove da37c2d4a8fa65065c11172eaf7dd299f60954678b9246f08969934c1265613b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_lewin, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:40:07 np0005591760 systemd[1]: libpod-conmon-da37c2d4a8fa65065c11172eaf7dd299f60954678b9246f08969934c1265613b.scope: Deactivated successfully.
Jan 22 04:40:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:07 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:40:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:07 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:40:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:40:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:40:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:40:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Jan 22 04:40:08 np0005591760 python3.9[134261]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:40:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:40:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:08 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:08 np0005591760 python3.9[134416]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:40:08 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:40:08 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:40:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:08.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:09 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:09 np0005591760 python3.9[134569]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:40:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:09.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:09 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Jan 22 04:40:10 np0005591760 python3.9[134747]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:10 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:10 np0005591760 systemd[1]: session-45.scope: Deactivated successfully.
Jan 22 04:40:10 np0005591760 systemd[1]: session-45.scope: Consumed 2.750s CPU time.
Jan 22 04:40:10 np0005591760 systemd-logind[747]: Session 45 logged out. Waiting for processes to exit.
Jan 22 04:40:10 np0005591760 systemd-logind[747]: Removed session 45.
Jan 22 04:40:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:10.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:10 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:40:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:11 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:11.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:11 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:40:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:12 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:12.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:40:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094013 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:40:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:13.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:40:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:40:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:40:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:14 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:40:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:14.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:40:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:15 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:15.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:15 np0005591760 systemd-logind[747]: New session 46 of user zuul.
Jan 22 04:40:15 np0005591760 systemd[1]: Started Session 46 of User zuul.
Jan 22 04:40:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:15 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Jan 22 04:40:16 np0005591760 python3.9[134931]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:40:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:16 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:16.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:16 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:40:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:16.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:16.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:16.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:16.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:17 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:17 np0005591760 python3.9[135088]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:40:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:17.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:40:17] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Jan 22 04:40:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:40:17] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Jan 22 04:40:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:17 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:40:17 np0005591760 python3.9[135172]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 04:40:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:40:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:18 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:40:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:18.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:40:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:19 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:40:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:40:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:40:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:40:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:40:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:40:19 np0005591760 python3.9[135325]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:40:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:19.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:19 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:40:20 np0005591760 python3.9[135477]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 04:40:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:20 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:20.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:21 np0005591760 python3.9[135628]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:40:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:21 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:21 np0005591760 python3.9[135778]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:40:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:21.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:21 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060008590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:21 np0005591760 systemd[1]: session-46.scope: Deactivated successfully.
Jan 22 04:40:21 np0005591760 systemd[1]: session-46.scope: Consumed 4.266s CPU time.
Jan 22 04:40:21 np0005591760 systemd-logind[747]: Session 46 logged out. Waiting for processes to exit.
Jan 22 04:40:21 np0005591760 systemd-logind[747]: Removed session 46.
Jan 22 04:40:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:40:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:22 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:40:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:22.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:40:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:23 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:40:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:40:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:23.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:40:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:23 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 22 04:40:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:24 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060008590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:40:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:24.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:40:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:25 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:40:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:25.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:40:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:25 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 22 04:40:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:26 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:26 np0005591760 systemd-logind[747]: New session 47 of user zuul.
Jan 22 04:40:26 np0005591760 systemd[1]: Started Session 47 of User zuul.
Jan 22 04:40:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:26.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:26.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:27 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:27 np0005591760 python3.9[135962]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:40:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:27.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:40:27] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Jan 22 04:40:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:40:27] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Jan 22 04:40:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:27 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:40:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:40:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:28 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:28 np0005591760 python3.9[136119]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:40:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:40:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:28.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:40:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:29 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:29 np0005591760 python3.9[136272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:40:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:29.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:29 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:40:29 np0005591760 python3.9[136424]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:30 np0005591760 python3.9[136573]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074829.5668402-151-123050147510682/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b8c658212030584fbcbb2776c6dcfde44ab84692 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:30 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:40:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:30.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:40:30 np0005591760 python3.9[136726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:31 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:31 np0005591760 python3.9[136849]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074830.5239422-151-247021474301123/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b941eed241a5b99f9369a04b2b65d73a34d75e07 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:40:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:31.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:40:31 np0005591760 python3.9[137001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:31 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:31 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:40:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:40:32 np0005591760 python3.9[137124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074831.3457925-151-148589747738914/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=365a72fdce75aae6ed6429fa5d725fd691db4082 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:32 np0005591760 python3.9[137277]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:40:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:32 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:32.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:32 np0005591760 python3.9[137430]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:40:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:33 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:40:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:33.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:33 np0005591760 python3.9[137582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:33 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:40:34 np0005591760 python3.9[137705]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074833.1081297-327-193992328874324/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=118e310a7a5262e05b1c80af53fafb4627335294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:34 np0005591760 python3.9[137858]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:40:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:40:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:34.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:40:34 np0005591760 python3.9[137982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074834.1460516-327-68424126468388/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3dfb7d65ba619ae3fdfdd05ac78d95a034b5ef3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:35 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:35 np0005591760 python3.9[138134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:35.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:35 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:35 np0005591760 python3.9[138257]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074835.0221546-327-94506673155655/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=806492e3f41b75b4b22311aa4e460f53f4aa8532 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 853 B/s wr, 2 op/s
Jan 22 04:40:36 np0005591760 python3.9[138411]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:40:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:36 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:40:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:36.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:40:36 np0005591760 python3.9[138565]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:40:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:36.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:36.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:36.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:36.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:37 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064002630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:37 np0005591760 python3.9[138717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:37.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:40:37] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:40:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:40:37] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:40:37 np0005591760 python3.9[138840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074837.0024166-513-115647310970548/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=aa94ddb101bbb65e10941736c80c2e95176d90cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:37 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064002630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:37 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:40:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 853 B/s wr, 2 op/s
Jan 22 04:40:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:40:38 np0005591760 python3.9[138993]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:38 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:38 np0005591760 python3.9[139116]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074837.8790114-513-99777072268158/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3dfb7d65ba619ae3fdfdd05ac78d95a034b5ef3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:38.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:39 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:39 np0005591760 python3.9[139269]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094039 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:40:39 np0005591760 python3.9[139392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074838.759084-513-58401313207409/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=fd388773203af2a8a767b5da5bb6a6f44d208715 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:39.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:39 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb070051860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 853 B/s wr, 2 op/s
Jan 22 04:40:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:40 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0640038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:40 np0005591760 python3.9[139545]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:40:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:40.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:41 np0005591760 python3.9[139698]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:41 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:41 np0005591760 python3.9[139821]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074840.6919756-723-91305261875207/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d5973fd0d9f852bbc11b3ee817a5e73d7de1dd3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:41.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:41 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:40:42 np0005591760 python3.9[139973]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:40:42 np0005591760 python3.9[140126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:42 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb070051860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:42 np0005591760 python3.9[140250]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074842.147412-799-261241720081508/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d5973fd0d9f852bbc11b3ee817a5e73d7de1dd3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:43 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0640038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:40:43 np0005591760 python3.9[140402]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:40:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:40:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:43.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:40:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:43 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 853 B/s wr, 2 op/s
Jan 22 04:40:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094043 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:40:43 np0005591760 python3.9[140554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:44 np0005591760 python3.9[140678]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074843.5981305-875-200437063478754/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d5973fd0d9f852bbc11b3ee817a5e73d7de1dd3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:44 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:44.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:44 np0005591760 python3.9[140831]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:40:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:45 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb070051860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:45 np0005591760 python3.9[140983]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:40:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:45.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:40:45 np0005591760 python3.9[141106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074844.9831903-949-68980185598307/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d5973fd0d9f852bbc11b3ee817a5e73d7de1dd3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:45 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0640038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Jan 22 04:40:46 np0005591760 python3.9[141259]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:40:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:46 np0005591760 python3.9[141411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:46.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:46.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:46.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:46.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:46.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:47 np0005591760 python3.9[141535]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074846.3101125-1020-204718357830309/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d5973fd0d9f852bbc11b3ee817a5e73d7de1dd3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:47 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:47 np0005591760 python3.9[141687]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:40:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:47.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:40:47] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:40:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:40:47] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:40:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:47 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:40:47 np0005591760 python3.9[141839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:40:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:48 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:40:48 np0005591760 python3.9[141963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074847.664794-1089-146669099822843/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d5973fd0d9f852bbc11b3ee817a5e73d7de1dd3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:48 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:48 np0005591760 systemd[1]: session-47.scope: Deactivated successfully.
Jan 22 04:40:48 np0005591760 systemd[1]: session-47.scope: Consumed 16.705s CPU time.
Jan 22 04:40:48 np0005591760 systemd-logind[747]: Session 47 logged out. Waiting for processes to exit.
Jan 22 04:40:48 np0005591760 systemd-logind[747]: Removed session 47.
Jan 22 04:40:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:48.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:49 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:40:49
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['vms', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'images', '.mgr']
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:40:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:49.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:49 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb070064830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:40:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094049 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:40:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:50 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:50 np0005591760 systemd[1]: session-18.scope: Deactivated successfully.
Jan 22 04:40:50 np0005591760 systemd[1]: session-18.scope: Consumed 1min 13.056s CPU time.
Jan 22 04:40:50 np0005591760 systemd-logind[747]: Session 18 logged out. Waiting for processes to exit.
Jan 22 04:40:50 np0005591760 systemd-logind[747]: Removed session 18.
Jan 22 04:40:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:50.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:51 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:51 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:40:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:51 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:40:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:51.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:51 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Jan 22 04:40:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:52 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0700648c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:52.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:53 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:40:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:53.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:53 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Jan 22 04:40:53 np0005591760 systemd-logind[747]: New session 48 of user zuul.
Jan 22 04:40:53 np0005591760 systemd[1]: Started Session 48 of User zuul.
Jan 22 04:40:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:54 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:54 np0005591760 python3.9[142174]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:54 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:40:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:54 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:40:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:54.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:55 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb070064830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:55 np0005591760 python3.9[142327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:55.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:55 np0005591760 python3.9[142450]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074854.7470043-57-143837895151142/.source.conf _original_basename=ceph.conf follow=False checksum=03d8d4124bbce310504894436c4a9612ab8c13f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:55 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c31c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Jan 22 04:40:56 np0005591760 python3.9[142603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:40:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:56 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064004b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:56 np0005591760 python3.9[142726]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074855.8921149-57-161515345882630/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=6b7917605681093964532d08a385bc3f0474a26c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:40:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:40:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:56.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:40:56 np0005591760 systemd-logind[747]: Session 48 logged out. Waiting for processes to exit.
Jan 22 04:40:56 np0005591760 systemd[1]: session-48.scope: Deactivated successfully.
Jan 22 04:40:56 np0005591760 systemd[1]: session-48.scope: Consumed 1.936s CPU time.
Jan 22 04:40:56 np0005591760 systemd-logind[747]: Removed session 48.
Jan 22 04:40:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:56.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:56.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:56.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:40:56.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:40:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:57 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064004b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:57.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:40:57] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Jan 22 04:40:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:40:57] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Jan 22 04:40:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:57 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:40:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:57 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064004b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 2 op/s
Jan 22 04:40:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:40:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:58 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:40:58.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:59 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:40:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:40:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:40:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:40:59.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:40:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:40:59 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:40:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 2 op/s
Jan 22 04:41:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:00 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064005a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:00 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:41:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:00.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:01 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064005a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:01.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:01 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c005040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:41:02 np0005591760 systemd-logind[747]: New session 49 of user zuul.
Jan 22 04:41:02 np0005591760 systemd[1]: Started Session 49 of User zuul.
Jan 22 04:41:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:02 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:41:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:02.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:41:02 np0005591760 python3.9[142913]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:41:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:41:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094103 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:41:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:03.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:41:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:41:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064005a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:03 np0005591760 python3.9[143069]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:41:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Jan 22 04:41:04 np0005591760 python3.9[143222]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:41:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:04 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:04 np0005591760 python3.9[143373]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:41:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:04.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:05 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:05 np0005591760 python3.9[143525]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 22 04:41:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:05.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:05 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 22 04:41:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:06 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064005a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:06 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:41:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:06.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:06.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:06.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:06.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:06.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:07 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:41:07] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:41:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:41:07] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:41:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:07.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:07 np0005591760 dbus-broker-launch[735]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 22 04:41:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:07 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:07 np0005591760 python3.9[143684]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:41:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:41:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:41:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:41:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:41:08 np0005591760 python3.9[143835]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:41:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:41:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:41:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:08 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:08.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:08 np0005591760 podman[143933]: 2026-01-22 09:41:08.909919223 +0000 UTC m=+0.026943537 container create eb79ed759dc5af5150f2e4dabc442415213ae3a6383b5a10305f14af400400ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:41:08 np0005591760 systemd[1]: Started libpod-conmon-eb79ed759dc5af5150f2e4dabc442415213ae3a6383b5a10305f14af400400ed.scope.
Jan 22 04:41:08 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:41:08 np0005591760 podman[143933]: 2026-01-22 09:41:08.968213668 +0000 UTC m=+0.085237981 container init eb79ed759dc5af5150f2e4dabc442415213ae3a6383b5a10305f14af400400ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_dubinsky, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 04:41:08 np0005591760 podman[143933]: 2026-01-22 09:41:08.973232052 +0000 UTC m=+0.090256365 container start eb79ed759dc5af5150f2e4dabc442415213ae3a6383b5a10305f14af400400ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 22 04:41:08 np0005591760 podman[143933]: 2026-01-22 09:41:08.974856305 +0000 UTC m=+0.091880618 container attach eb79ed759dc5af5150f2e4dabc442415213ae3a6383b5a10305f14af400400ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_dubinsky, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 04:41:08 np0005591760 tender_dubinsky[143946]: 167 167
Jan 22 04:41:08 np0005591760 systemd[1]: libpod-eb79ed759dc5af5150f2e4dabc442415213ae3a6383b5a10305f14af400400ed.scope: Deactivated successfully.
Jan 22 04:41:08 np0005591760 podman[143933]: 2026-01-22 09:41:08.977524465 +0000 UTC m=+0.094548799 container died eb79ed759dc5af5150f2e4dabc442415213ae3a6383b5a10305f14af400400ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 22 04:41:08 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a059d5f1e8c4f6deab15f77ec4d1dbfbc424a05970773c76d362f5639ece7a1c-merged.mount: Deactivated successfully.
Jan 22 04:41:08 np0005591760 podman[143933]: 2026-01-22 09:41:08.89872224 +0000 UTC m=+0.015746573 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:41:08 np0005591760 podman[143933]: 2026-01-22 09:41:08.997565838 +0000 UTC m=+0.114590152 container remove eb79ed759dc5af5150f2e4dabc442415213ae3a6383b5a10305f14af400400ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_dubinsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 04:41:09 np0005591760 systemd[1]: libpod-conmon-eb79ed759dc5af5150f2e4dabc442415213ae3a6383b5a10305f14af400400ed.scope: Deactivated successfully.
Jan 22 04:41:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:09 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:09 np0005591760 podman[143968]: 2026-01-22 09:41:09.113755004 +0000 UTC m=+0.030064694 container create 762ab5d4976ce38c037595a9910c7dd55d340ee6a627452981eff20658a3d333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 04:41:09 np0005591760 systemd[1]: Started libpod-conmon-762ab5d4976ce38c037595a9910c7dd55d340ee6a627452981eff20658a3d333.scope.
Jan 22 04:41:09 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:41:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a0215b451b02b5fe94eb4d47ebd6dce9aa90f6bd0deb23d7d8b044e16ab852/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a0215b451b02b5fe94eb4d47ebd6dce9aa90f6bd0deb23d7d8b044e16ab852/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a0215b451b02b5fe94eb4d47ebd6dce9aa90f6bd0deb23d7d8b044e16ab852/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a0215b451b02b5fe94eb4d47ebd6dce9aa90f6bd0deb23d7d8b044e16ab852/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a0215b451b02b5fe94eb4d47ebd6dce9aa90f6bd0deb23d7d8b044e16ab852/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:09 np0005591760 podman[143968]: 2026-01-22 09:41:09.165517262 +0000 UTC m=+0.081826972 container init 762ab5d4976ce38c037595a9910c7dd55d340ee6a627452981eff20658a3d333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 22 04:41:09 np0005591760 podman[143968]: 2026-01-22 09:41:09.171230815 +0000 UTC m=+0.087540505 container start 762ab5d4976ce38c037595a9910c7dd55d340ee6a627452981eff20658a3d333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 04:41:09 np0005591760 podman[143968]: 2026-01-22 09:41:09.172360014 +0000 UTC m=+0.088669704 container attach 762ab5d4976ce38c037595a9910c7dd55d340ee6a627452981eff20658a3d333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_robinson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:41:09 np0005591760 podman[143968]: 2026-01-22 09:41:09.101653265 +0000 UTC m=+0.017962974 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:41:09 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:41:09 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:41:09 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:41:09 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:41:09 np0005591760 kind_robinson[143982]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:41:09 np0005591760 kind_robinson[143982]: --> All data devices are unavailable
Jan 22 04:41:09 np0005591760 systemd[1]: libpod-762ab5d4976ce38c037595a9910c7dd55d340ee6a627452981eff20658a3d333.scope: Deactivated successfully.
Jan 22 04:41:09 np0005591760 podman[143968]: 2026-01-22 09:41:09.446192196 +0000 UTC m=+0.362501896 container died 762ab5d4976ce38c037595a9910c7dd55d340ee6a627452981eff20658a3d333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 22 04:41:09 np0005591760 systemd[1]: var-lib-containers-storage-overlay-28a0215b451b02b5fe94eb4d47ebd6dce9aa90f6bd0deb23d7d8b044e16ab852-merged.mount: Deactivated successfully.
Jan 22 04:41:09 np0005591760 podman[143968]: 2026-01-22 09:41:09.469541506 +0000 UTC m=+0.385851195 container remove 762ab5d4976ce38c037595a9910c7dd55d340ee6a627452981eff20658a3d333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_robinson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:41:09 np0005591760 systemd[1]: libpod-conmon-762ab5d4976ce38c037595a9910c7dd55d340ee6a627452981eff20658a3d333.scope: Deactivated successfully.
Jan 22 04:41:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:09.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:09 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:09 np0005591760 podman[144164]: 2026-01-22 09:41:09.881828308 +0000 UTC m=+0.027522921 container create d0a1ce61292ad7a40cf5bc4bb3ff0a1dea073f924026d7445ec543cf01b1cd91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 04:41:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:41:09 np0005591760 systemd[1]: Started libpod-conmon-d0a1ce61292ad7a40cf5bc4bb3ff0a1dea073f924026d7445ec543cf01b1cd91.scope.
Jan 22 04:41:09 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:41:09 np0005591760 podman[144164]: 2026-01-22 09:41:09.935369289 +0000 UTC m=+0.081063892 container init d0a1ce61292ad7a40cf5bc4bb3ff0a1dea073f924026d7445ec543cf01b1cd91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 04:41:09 np0005591760 podman[144164]: 2026-01-22 09:41:09.940210948 +0000 UTC m=+0.085905552 container start d0a1ce61292ad7a40cf5bc4bb3ff0a1dea073f924026d7445ec543cf01b1cd91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chaum, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 04:41:09 np0005591760 podman[144164]: 2026-01-22 09:41:09.942796996 +0000 UTC m=+0.088491609 container attach d0a1ce61292ad7a40cf5bc4bb3ff0a1dea073f924026d7445ec543cf01b1cd91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chaum, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:41:09 np0005591760 sad_chaum[144178]: 167 167
Jan 22 04:41:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094109 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:41:09 np0005591760 systemd[1]: libpod-d0a1ce61292ad7a40cf5bc4bb3ff0a1dea073f924026d7445ec543cf01b1cd91.scope: Deactivated successfully.
Jan 22 04:41:09 np0005591760 podman[144164]: 2026-01-22 09:41:09.94401906 +0000 UTC m=+0.089713662 container died d0a1ce61292ad7a40cf5bc4bb3ff0a1dea073f924026d7445ec543cf01b1cd91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chaum, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 22 04:41:09 np0005591760 systemd[1]: var-lib-containers-storage-overlay-aedcd5a34d8035c4d8f4d7539d53a33bbbfdaa0ad138eeccae1e93a64d79b966-merged.mount: Deactivated successfully.
Jan 22 04:41:09 np0005591760 podman[144164]: 2026-01-22 09:41:09.96483926 +0000 UTC m=+0.110533863 container remove d0a1ce61292ad7a40cf5bc4bb3ff0a1dea073f924026d7445ec543cf01b1cd91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:41:09 np0005591760 podman[144164]: 2026-01-22 09:41:09.870116423 +0000 UTC m=+0.015811046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:41:09 np0005591760 systemd[1]: libpod-conmon-d0a1ce61292ad7a40cf5bc4bb3ff0a1dea073f924026d7445ec543cf01b1cd91.scope: Deactivated successfully.
Jan 22 04:41:10 np0005591760 podman[144247]: 2026-01-22 09:41:10.088586128 +0000 UTC m=+0.036109232 container create 0e940e22b2e24810fb56d5cde698ca4d826c1d8c1edaae2456043f3b292e3923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 22 04:41:10 np0005591760 systemd[1]: Started libpod-conmon-0e940e22b2e24810fb56d5cde698ca4d826c1d8c1edaae2456043f3b292e3923.scope.
Jan 22 04:41:10 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:41:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fb97cb0a4cd3440c151c5235009728b76a43f13e2d4a76531daf19ffb1cb9d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fb97cb0a4cd3440c151c5235009728b76a43f13e2d4a76531daf19ffb1cb9d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fb97cb0a4cd3440c151c5235009728b76a43f13e2d4a76531daf19ffb1cb9d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fb97cb0a4cd3440c151c5235009728b76a43f13e2d4a76531daf19ffb1cb9d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:10 np0005591760 podman[144247]: 2026-01-22 09:41:10.157097659 +0000 UTC m=+0.104620773 container init 0e940e22b2e24810fb56d5cde698ca4d826c1d8c1edaae2456043f3b292e3923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:41:10 np0005591760 podman[144247]: 2026-01-22 09:41:10.161805617 +0000 UTC m=+0.109328720 container start 0e940e22b2e24810fb56d5cde698ca4d826c1d8c1edaae2456043f3b292e3923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:41:10 np0005591760 podman[144247]: 2026-01-22 09:41:10.074357878 +0000 UTC m=+0.021881002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:41:10 np0005591760 podman[144247]: 2026-01-22 09:41:10.168219191 +0000 UTC m=+0.115742315 container attach 0e940e22b2e24810fb56d5cde698ca4d826c1d8c1edaae2456043f3b292e3923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:41:10 np0005591760 python3.9[144315]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 04:41:10 np0005591760 bold_mendel[144310]: {
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:    "0": [
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:        {
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            "devices": [
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "/dev/loop3"
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            ],
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            "lv_name": "ceph_lv0",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            "lv_size": "21470642176",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            "name": "ceph_lv0",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            "tags": {
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.cluster_name": "ceph",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.crush_device_class": "",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.encrypted": "0",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.osd_id": "0",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.type": "block",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.vdo": "0",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:                "ceph.with_tpm": "0"
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            },
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            "type": "block",
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:            "vg_name": "ceph_vg0"
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:        }
Jan 22 04:41:10 np0005591760 bold_mendel[144310]:    ]
Jan 22 04:41:10 np0005591760 bold_mendel[144310]: }
Jan 22 04:41:10 np0005591760 podman[144247]: 2026-01-22 09:41:10.430010186 +0000 UTC m=+0.377533290 container died 0e940e22b2e24810fb56d5cde698ca4d826c1d8c1edaae2456043f3b292e3923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 04:41:10 np0005591760 systemd[1]: libpod-0e940e22b2e24810fb56d5cde698ca4d826c1d8c1edaae2456043f3b292e3923.scope: Deactivated successfully.
Jan 22 04:41:10 np0005591760 systemd[1]: var-lib-containers-storage-overlay-7fb97cb0a4cd3440c151c5235009728b76a43f13e2d4a76531daf19ffb1cb9d1-merged.mount: Deactivated successfully.
Jan 22 04:41:10 np0005591760 podman[144247]: 2026-01-22 09:41:10.451933387 +0000 UTC m=+0.399456490 container remove 0e940e22b2e24810fb56d5cde698ca4d826c1d8c1edaae2456043f3b292e3923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 04:41:10 np0005591760 systemd[1]: libpod-conmon-0e940e22b2e24810fb56d5cde698ca4d826c1d8c1edaae2456043f3b292e3923.scope: Deactivated successfully.
Jan 22 04:41:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:10 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:10 np0005591760 podman[144517]: 2026-01-22 09:41:10.866463319 +0000 UTC m=+0.027252371 container create 9fa7152ec1e3081c2cb31e36bb08c75a98735bdde3605237eb7d38056efce96d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:41:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:10.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:10 np0005591760 systemd[1]: Started libpod-conmon-9fa7152ec1e3081c2cb31e36bb08c75a98735bdde3605237eb7d38056efce96d.scope.
Jan 22 04:41:10 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:41:10 np0005591760 podman[144517]: 2026-01-22 09:41:10.924364351 +0000 UTC m=+0.085153413 container init 9fa7152ec1e3081c2cb31e36bb08c75a98735bdde3605237eb7d38056efce96d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_gould, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 04:41:10 np0005591760 podman[144517]: 2026-01-22 09:41:10.929357147 +0000 UTC m=+0.090146199 container start 9fa7152ec1e3081c2cb31e36bb08c75a98735bdde3605237eb7d38056efce96d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:41:10 np0005591760 dazzling_gould[144555]: 167 167
Jan 22 04:41:10 np0005591760 podman[144517]: 2026-01-22 09:41:10.933227445 +0000 UTC m=+0.094016498 container attach 9fa7152ec1e3081c2cb31e36bb08c75a98735bdde3605237eb7d38056efce96d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_gould, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:41:10 np0005591760 systemd[1]: libpod-9fa7152ec1e3081c2cb31e36bb08c75a98735bdde3605237eb7d38056efce96d.scope: Deactivated successfully.
Jan 22 04:41:10 np0005591760 conmon[144555]: conmon 9fa7152ec1e3081c2cb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9fa7152ec1e3081c2cb31e36bb08c75a98735bdde3605237eb7d38056efce96d.scope/container/memory.events
Jan 22 04:41:10 np0005591760 podman[144517]: 2026-01-22 09:41:10.934332198 +0000 UTC m=+0.095121250 container died 9fa7152ec1e3081c2cb31e36bb08c75a98735bdde3605237eb7d38056efce96d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 04:41:10 np0005591760 systemd[1]: var-lib-containers-storage-overlay-009d3d400dea670e021dbefee90fdc04465ee1da7d40d37241affa4f668d5ad4-merged.mount: Deactivated successfully.
Jan 22 04:41:10 np0005591760 podman[144517]: 2026-01-22 09:41:10.855309015 +0000 UTC m=+0.016098087 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:41:10 np0005591760 podman[144517]: 2026-01-22 09:41:10.953441502 +0000 UTC m=+0.114230554 container remove 9fa7152ec1e3081c2cb31e36bb08c75a98735bdde3605237eb7d38056efce96d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 04:41:10 np0005591760 systemd[1]: libpod-conmon-9fa7152ec1e3081c2cb31e36bb08c75a98735bdde3605237eb7d38056efce96d.scope: Deactivated successfully.
Jan 22 04:41:11 np0005591760 podman[144607]: 2026-01-22 09:41:11.074979014 +0000 UTC m=+0.032223726 container create e6fb83868b89ff9f7d581ca1a884f0c83c960b0736462112c8bc29ef2a52cd88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_chebyshev, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:41:11 np0005591760 systemd[1]: Started libpod-conmon-e6fb83868b89ff9f7d581ca1a884f0c83c960b0736462112c8bc29ef2a52cd88.scope.
Jan 22 04:41:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:11 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:11 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:41:11 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cee28dd62c68d5d3064f650d71c5c9428c918dfb6bf1041f0d434a2b60ca23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:11 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cee28dd62c68d5d3064f650d71c5c9428c918dfb6bf1041f0d434a2b60ca23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:11 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cee28dd62c68d5d3064f650d71c5c9428c918dfb6bf1041f0d434a2b60ca23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:11 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cee28dd62c68d5d3064f650d71c5c9428c918dfb6bf1041f0d434a2b60ca23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:11 np0005591760 python3[144590]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 22 04:41:11 np0005591760 podman[144607]: 2026-01-22 09:41:11.134486666 +0000 UTC m=+0.091731398 container init e6fb83868b89ff9f7d581ca1a884f0c83c960b0736462112c8bc29ef2a52cd88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_chebyshev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Jan 22 04:41:11 np0005591760 podman[144607]: 2026-01-22 09:41:11.139575753 +0000 UTC m=+0.096820465 container start e6fb83868b89ff9f7d581ca1a884f0c83c960b0736462112c8bc29ef2a52cd88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_chebyshev, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:41:11 np0005591760 podman[144607]: 2026-01-22 09:41:11.14687725 +0000 UTC m=+0.104121963 container attach e6fb83868b89ff9f7d581ca1a884f0c83c960b0736462112c8bc29ef2a52cd88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_chebyshev, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:41:11 np0005591760 podman[144607]: 2026-01-22 09:41:11.061888108 +0000 UTC m=+0.019132820 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:41:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094111 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:41:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:11.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:11 np0005591760 beautiful_chebyshev[144620]: {}
Jan 22 04:41:11 np0005591760 lvm[144849]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:41:11 np0005591760 lvm[144849]: VG ceph_vg0 finished
Jan 22 04:41:11 np0005591760 podman[144607]: 2026-01-22 09:41:11.649305913 +0000 UTC m=+0.606550615 container died e6fb83868b89ff9f7d581ca1a884f0c83c960b0736462112c8bc29ef2a52cd88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 22 04:41:11 np0005591760 systemd[1]: libpod-e6fb83868b89ff9f7d581ca1a884f0c83c960b0736462112c8bc29ef2a52cd88.scope: Deactivated successfully.
Jan 22 04:41:11 np0005591760 systemd[1]: var-lib-containers-storage-overlay-11cee28dd62c68d5d3064f650d71c5c9428c918dfb6bf1041f0d434a2b60ca23-merged.mount: Deactivated successfully.
Jan 22 04:41:11 np0005591760 podman[144607]: 2026-01-22 09:41:11.677519945 +0000 UTC m=+0.634764657 container remove e6fb83868b89ff9f7d581ca1a884f0c83c960b0736462112c8bc29ef2a52cd88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_chebyshev, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:41:11 np0005591760 systemd[1]: libpod-conmon-e6fb83868b89ff9f7d581ca1a884f0c83c960b0736462112c8bc29ef2a52cd88.scope: Deactivated successfully.
Jan 22 04:41:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:41:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:41:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:41:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:41:11 np0005591760 python3.9[144844]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:11 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:41:12 np0005591760 python3.9[145038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:12 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:41:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:41:12 np0005591760 python3.9[145116]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:41:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:12.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:41:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:13 np0005591760 python3.9[145269]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:41:13 np0005591760 python3.9[145347]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6jrsfbra recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:13.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 2 op/s
Jan 22 04:41:14 np0005591760 python3.9[145499]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:14 np0005591760 python3.9[145578]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:14 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064005a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:14.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:14 np0005591760 python3.9[145731]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:41:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:15 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:15.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:15 np0005591760 python3[145884]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 04:41:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:15 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 2 op/s
Jan 22 04:41:16 np0005591760 python3.9[146037]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:16 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:16 np0005591760 python3.9[146162]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074875.772316-426-268329740393968/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:41:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:16.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:41:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:16.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:16.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:16.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:16.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:17 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:17 np0005591760 python3.9[146315]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:17 np0005591760 python3.9[146440]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074876.7911644-471-237002451943361/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:41:17] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:41:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:41:17] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:41:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:41:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:17.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:41:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:17 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064005a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:41:18 np0005591760 python3.9[146593]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:41:18 np0005591760 python3.9[146718]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074877.7464752-516-7908284934756/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:18 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064005a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:41:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:18.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:41:19 np0005591760 python3.9[146871]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:19 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:41:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:41:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:41:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:41:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:41:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:41:19 np0005591760 python3.9[146996]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074878.683711-561-136932907150992/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:19.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:19 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:41:20 np0005591760 python3.9[147149]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:20 np0005591760 python3.9[147274]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769074879.727027-606-120809480857145/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:20 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:20 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:41:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:41:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:20.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:41:21 np0005591760 python3.9[147427]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:21 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064005a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:21 np0005591760 python3.9[147579]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:41:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:21.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:21 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:41:22 np0005591760 python3.9[147735]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:22 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:22 np0005591760 python3.9[147887]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:41:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:22.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:23 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:23 np0005591760 python3.9[148041]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:41:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:41:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:23.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:23 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:41:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:23 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:41:23 np0005591760 python3.9[148195]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:41:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:23 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064005a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:41:24 np0005591760 python3.9[148351]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:24 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:24.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:25 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:25 np0005591760 python3.9[148502]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:41:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:41:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:25.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:41:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:25 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:41:26 np0005591760 python3.9[148656]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:41:26 np0005591760 ovs-vsctl[148657]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 22 04:41:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:26 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064005a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:26 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:41:26 np0005591760 python3.9[148810]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:41:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:26.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:26.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:26.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:26.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:26.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:27 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540c3250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:27 np0005591760 python3.9[148965]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:41:27 np0005591760 ovs-vsctl[148966]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 22 04:41:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:41:27] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 22 04:41:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:41:27] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 22 04:41:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:27.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:27 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:27 np0005591760 python3.9[149116]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:41:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:41:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:41:28 np0005591760 python3.9[149274]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:41:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:28 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:28.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:29 np0005591760 python3.9[149427]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:29 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:29 np0005591760 python3.9[149505]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:41:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:29.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:29 np0005591760 python3.9[149657]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:29 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:41:30 np0005591760 python3.9[149736]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:41:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:30 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:30 np0005591760 python3.9[149913]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:30.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:31 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:31 np0005591760 python3.9[150066]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:31 np0005591760 python3.9[150144]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:31.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:31 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:41:32 np0005591760 python3.9[150296]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:32 np0005591760 python3.9[150375]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:32 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:32.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:32 np0005591760 python3.9[150528]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:41:32 np0005591760 systemd[1]: Reloading.
Jan 22 04:41:33 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:41:33 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:41:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:33 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:41:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094133 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:41:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:33.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:33 np0005591760 python3.9[150716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:33 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.017194) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074894017233, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2356, "num_deletes": 251, "total_data_size": 4274741, "memory_usage": 4353280, "flush_reason": "Manual Compaction"}
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074894026300, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 4149939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10597, "largest_seqno": 12952, "table_properties": {"data_size": 4139173, "index_size": 6812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 24357, "raw_average_key_size": 21, "raw_value_size": 4116738, "raw_average_value_size": 3579, "num_data_blocks": 297, "num_entries": 1150, "num_filter_entries": 1150, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074702, "oldest_key_time": 1769074702, "file_creation_time": 1769074894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 9130 microseconds, and 6579 cpu microseconds.
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.026328) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 4149939 bytes OK
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.026348) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.027416) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.027426) EVENT_LOG_v1 {"time_micros": 1769074894027424, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.027439) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4264730, prev total WAL file size 4264730, number of live WAL files 2.
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.028086) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(4052KB)], [26(13MB)]
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074894028128, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 18279682, "oldest_snapshot_seqno": -1}
Jan 22 04:41:34 np0005591760 python3.9[150794]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4541 keys, 15426077 bytes, temperature: kUnknown
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074894063587, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 15426077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15388731, "index_size": 24884, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 114588, "raw_average_key_size": 25, "raw_value_size": 15299002, "raw_average_value_size": 3369, "num_data_blocks": 1064, "num_entries": 4541, "num_filter_entries": 4541, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769074894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.063711) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 15426077 bytes
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.064207) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 514.9 rd, 434.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 13.5 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(8.1) write-amplify(3.7) OK, records in: 5061, records dropped: 520 output_compression: NoCompression
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.064220) EVENT_LOG_v1 {"time_micros": 1769074894064215, "job": 10, "event": "compaction_finished", "compaction_time_micros": 35503, "compaction_time_cpu_micros": 21758, "output_level": 6, "num_output_files": 1, "total_output_size": 15426077, "num_input_records": 5061, "num_output_records": 4541, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074894064677, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074894066084, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.028039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.066107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.066110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.066112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.066113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:41:34 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:34.066114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:41:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:34 np0005591760 python3.9[150947]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:41:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:34.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:41:34 np0005591760 python3.9[151026]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:35 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:35 np0005591760 python3.9[151178]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:41:35 np0005591760 systemd[1]: Reloading.
Jan 22 04:41:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:35.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:35 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:41:35 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:41:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:35 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:35 np0005591760 systemd[1]: Starting Create netns directory...
Jan 22 04:41:35 np0005591760 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 04:41:35 np0005591760 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 04:41:35 np0005591760 systemd[1]: Finished Create netns directory.
Jan 22 04:41:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:41:36 np0005591760 python3.9[151372]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:41:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:36 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:36.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:36.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:36.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:36.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:36.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:37 np0005591760 python3.9[151525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:37 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:37 np0005591760 python3.9[151648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074896.7045753-1359-245971770981006/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:41:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:41:37] "GET /metrics HTTP/1.1" 200 48393 "" "Prometheus/2.51.0"
Jan 22 04:41:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:41:37] "GET /metrics HTTP/1.1" 200 48393 "" "Prometheus/2.51.0"
Jan 22 04:41:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:41:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:37.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:41:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:37 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:41:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:41:38 np0005591760 python3.9[151801]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:38 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:38 np0005591760 python3.9[151954]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:41:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:38.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:39 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:39 np0005591760 python3.9[152106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:41:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:39.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:41:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:39 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:39 np0005591760 python3.9[152229]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074899.0815022-1458-130759186447529/.source.json _original_basename=.ckdqfn_p follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:41:40 np0005591760 python3.9[152380]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:40 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:40.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:41 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:41.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:41 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:41:42 np0005591760 python3.9[152804]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 22 04:41:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:42 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:42 np0005591760 python3.9[152958]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 04:41:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:42.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:43 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:41:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:41:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:43.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:41:43 np0005591760 python3[153110]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 04:41:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:43 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:41:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094143 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:41:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:44 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:44.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:45 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:45.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:45 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:41:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:46.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:46.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:46.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:46.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:46.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:47 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040002f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:41:47] "GET /metrics HTTP/1.1" 200 48393 "" "Prometheus/2.51.0"
Jan 22 04:41:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:41:47] "GET /metrics HTTP/1.1" 200 48393 "" "Prometheus/2.51.0"
Jan 22 04:41:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:47.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:47 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.241211) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074908241246, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 357, "num_deletes": 251, "total_data_size": 253600, "memory_usage": 260064, "flush_reason": "Manual Compaction"}
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074908242442, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 235259, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12953, "largest_seqno": 13309, "table_properties": {"data_size": 233057, "index_size": 366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5607, "raw_average_key_size": 19, "raw_value_size": 228769, "raw_average_value_size": 775, "num_data_blocks": 16, "num_entries": 295, "num_filter_entries": 295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074895, "oldest_key_time": 1769074895, "file_creation_time": 1769074908, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 1252 microseconds, and 856 cpu microseconds.
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.242466) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 235259 bytes OK
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.242485) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.242819) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.242837) EVENT_LOG_v1 {"time_micros": 1769074908242833, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.242847) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 251234, prev total WAL file size 251234, number of live WAL files 2.
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.243184) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(229KB)], [29(14MB)]
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074908243220, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15661336, "oldest_snapshot_seqno": -1}
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4327 keys, 12172430 bytes, temperature: kUnknown
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074908276012, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 12172430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12140351, "index_size": 20129, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 110672, "raw_average_key_size": 25, "raw_value_size": 12058126, "raw_average_value_size": 2786, "num_data_blocks": 858, "num_entries": 4327, "num_filter_entries": 4327, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769074908, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.276179) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12172430 bytes
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.276597) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 476.7 rd, 370.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 14.7 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(118.3) write-amplify(51.7) OK, records in: 4836, records dropped: 509 output_compression: NoCompression
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.276611) EVENT_LOG_v1 {"time_micros": 1769074908276604, "job": 12, "event": "compaction_finished", "compaction_time_micros": 32855, "compaction_time_cpu_micros": 19692, "output_level": 6, "num_output_files": 1, "total_output_size": 12172430, "num_input_records": 4836, "num_output_records": 4327, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074908276838, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769074908278426, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.243140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.278481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.278484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.278485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.278486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:41:48 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:41:48.278487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:41:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:48 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:48.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:48 np0005591760 podman[153121]: 2026-01-22 09:41:48.928275505 +0000 UTC m=+5.099617359 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Jan 22 04:41:49 np0005591760 podman[153243]: 2026-01-22 09:41:49.027225594 +0000 UTC m=+0.031069899 container create b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Jan 22 04:41:49 np0005591760 podman[153243]: 2026-01-22 09:41:49.012791336 +0000 UTC m=+0.016635641 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Jan 22 04:41:49 np0005591760 python3[153110]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Jan 22 04:41:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:49 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:41:49
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.nfs', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'vms', 'default.rgw.meta']
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:41:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:49.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:49 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:41:50 np0005591760 python3.9[153448]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:41:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:50 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:50.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:50 np0005591760 python3.9[153603]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:51 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:51 np0005591760 python3.9[153679]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:41:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:41:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:51.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:41:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:51 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:41:51 np0005591760 python3.9[153830]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769074911.5343783-1692-23614635321895/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:52 np0005591760 python3.9[153907]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 04:41:52 np0005591760 systemd[1]: Reloading.
Jan 22 04:41:52 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:41:52 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:41:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:52 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:52 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:41:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:52.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:53 np0005591760 python3.9[154019]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:41:53 np0005591760 systemd[1]: Reloading.
Jan 22 04:41:53 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:41:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:53 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:53 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:41:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:41:53 np0005591760 systemd[1]: Starting ovn_controller container...
Jan 22 04:41:53 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:41:53 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420b30a247faf9e1f72f4e3367e3306dc6e485c1e1eceadcfe06d26792cd21b6/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 22 04:41:53 np0005591760 systemd[1]: Started /usr/bin/podman healthcheck run b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a.
Jan 22 04:41:53 np0005591760 podman[154060]: 2026-01-22 09:41:53.405964053 +0000 UTC m=+0.073739440 container init b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller)
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: + sudo -E kolla_set_configs
Jan 22 04:41:53 np0005591760 podman[154060]: 2026-01-22 09:41:53.430220794 +0000 UTC m=+0.097996161 container start b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 04:41:53 np0005591760 edpm-start-podman-container[154060]: ovn_controller
Jan 22 04:41:53 np0005591760 systemd[1]: Created slice User Slice of UID 0.
Jan 22 04:41:53 np0005591760 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 22 04:41:53 np0005591760 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 22 04:41:53 np0005591760 systemd[1]: Starting User Manager for UID 0...
Jan 22 04:41:53 np0005591760 edpm-start-podman-container[154059]: Creating additional drop-in dependency for "ovn_controller" (b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a)
Jan 22 04:41:53 np0005591760 podman[154080]: 2026-01-22 09:41:53.491364933 +0000 UTC m=+0.052366338 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:41:53 np0005591760 systemd[1]: b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a-3c249f2175a11617.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 04:41:53 np0005591760 systemd[1]: b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a-3c249f2175a11617.service: Failed with result 'exit-code'.
Jan 22 04:41:53 np0005591760 systemd[1]: Reloading.
Jan 22 04:41:53 np0005591760 systemd[154100]: Queued start job for default target Main User Target.
Jan 22 04:41:53 np0005591760 systemd[154100]: Created slice User Application Slice.
Jan 22 04:41:53 np0005591760 systemd[154100]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 22 04:41:53 np0005591760 systemd[154100]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 04:41:53 np0005591760 systemd[154100]: Reached target Paths.
Jan 22 04:41:53 np0005591760 systemd[154100]: Reached target Timers.
Jan 22 04:41:53 np0005591760 systemd[154100]: Starting D-Bus User Message Bus Socket...
Jan 22 04:41:53 np0005591760 systemd[154100]: Starting Create User's Volatile Files and Directories...
Jan 22 04:41:53 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:41:53 np0005591760 systemd[154100]: Listening on D-Bus User Message Bus Socket.
Jan 22 04:41:53 np0005591760 systemd[154100]: Reached target Sockets.
Jan 22 04:41:53 np0005591760 systemd[154100]: Finished Create User's Volatile Files and Directories.
Jan 22 04:41:53 np0005591760 systemd[154100]: Reached target Basic System.
Jan 22 04:41:53 np0005591760 systemd[154100]: Reached target Main User Target.
Jan 22 04:41:53 np0005591760 systemd[154100]: Startup finished in 105ms.
Jan 22 04:41:53 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:41:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:53.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:53 np0005591760 systemd[1]: Started User Manager for UID 0.
Jan 22 04:41:53 np0005591760 systemd[1]: Started ovn_controller container.
Jan 22 04:41:53 np0005591760 systemd[1]: Started Session c1 of User root.
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: INFO:__main__:Validating config file
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: INFO:__main__:Writing out command to execute
Jan 22 04:41:53 np0005591760 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: ++ cat /run_command
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: + ARGS=
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: + sudo kolla_copy_cacerts
Jan 22 04:41:53 np0005591760 systemd[1]: Started Session c2 of User root.
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: + [[ ! -n '' ]]
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: + . kolla_extend_start
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: + umask 0022
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 22 04:41:53 np0005591760 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 22 04:41:53 np0005591760 NetworkManager[48920]: <info>  [1769074913.8386] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 22 04:41:53 np0005591760 NetworkManager[48920]: <info>  [1769074913.8390] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:41:53 np0005591760 NetworkManager[48920]: <warn>  [1769074913.8391] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 04:41:53 np0005591760 NetworkManager[48920]: <info>  [1769074913.8395] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 22 04:41:53 np0005591760 NetworkManager[48920]: <info>  [1769074913.8398] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 22 04:41:53 np0005591760 NetworkManager[48920]: <info>  [1769074913.8400] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 22 04:41:53 np0005591760 kernel: br-int: entered promiscuous mode
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 04:41:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:53 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 04:41:53 np0005591760 ovn_controller[154073]: 2026-01-22T09:41:53Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 04:41:53 np0005591760 NetworkManager[48920]: <info>  [1769074913.8558] manager: (ovn-09ce5d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 22 04:41:53 np0005591760 systemd-udevd[154200]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:41:53 np0005591760 NetworkManager[48920]: <info>  [1769074913.8668] manager: (ovn-61e048-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 22 04:41:53 np0005591760 systemd-udevd[154203]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:41:53 np0005591760 kernel: genev_sys_6081: entered promiscuous mode
Jan 22 04:41:53 np0005591760 NetworkManager[48920]: <info>  [1769074913.8695] device (genev_sys_6081): carrier: link connected
Jan 22 04:41:53 np0005591760 NetworkManager[48920]: <info>  [1769074913.8697] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Jan 22 04:41:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:41:53 np0005591760 NetworkManager[48920]: <info>  [1769074913.9924] manager: (ovn-eb0238-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 22 04:41:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:54 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:54 np0005591760 python3.9[154331]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 22 04:41:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:54.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:55 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:55.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:55 np0005591760 python3.9[154484]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:41:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:55 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:55 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:41:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:55 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:41:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:41:56 np0005591760 python3.9[154608]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074915.2138317-1827-218311724528554/.source.yaml _original_basename=.6w8f_kib follow=False checksum=30769c46b1c73c629551b9176b18950dfb75be0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:41:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:56 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:56 np0005591760 python3.9[154760]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:41:56 np0005591760 ovs-vsctl[154762]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 22 04:41:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:56.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:56.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:56.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:56.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:41:56.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:41:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:57 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:57 np0005591760 python3.9[154914]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:41:57 np0005591760 ovs-vsctl[154916]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 22 04:41:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:41:57] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:41:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:41:57] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:41:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:57.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:57 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040004f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:41:58 np0005591760 python3.9[155070]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:41:58 np0005591760 ovs-vsctl[155071]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 22 04:41:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:41:58 np0005591760 systemd-logind[747]: Session 49 logged out. Waiting for processes to exit.
Jan 22 04:41:58 np0005591760 systemd[1]: session-49.scope: Deactivated successfully.
Jan 22 04:41:58 np0005591760 systemd[1]: session-49.scope: Consumed 41.493s CPU time.
Jan 22 04:41:58 np0005591760 systemd-logind[747]: Removed session 49.
Jan 22 04:41:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:58 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:58 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:41:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:41:58.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:59 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:41:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:41:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:41:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:41:59.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:41:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:41:59 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:41:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:42:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:00 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 04:42:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:00.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 04:42:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:01 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:01.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:01 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:42:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:02 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c004fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:02.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:42:03 np0005591760 systemd-logind[747]: New session 51 of user zuul.
Jan 22 04:42:03 np0005591760 systemd[1]: Started Session 51 of User zuul.
Jan 22 04:42:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:03.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 22 04:42:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094203 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:42:03 np0005591760 systemd[1]: Stopping User Manager for UID 0...
Jan 22 04:42:03 np0005591760 systemd[154100]: Activating special unit Exit the Session...
Jan 22 04:42:03 np0005591760 systemd[154100]: Stopped target Main User Target.
Jan 22 04:42:03 np0005591760 systemd[154100]: Stopped target Basic System.
Jan 22 04:42:03 np0005591760 systemd[154100]: Stopped target Paths.
Jan 22 04:42:03 np0005591760 systemd[154100]: Stopped target Sockets.
Jan 22 04:42:03 np0005591760 systemd[154100]: Stopped target Timers.
Jan 22 04:42:03 np0005591760 systemd[154100]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 22 04:42:03 np0005591760 systemd[154100]: Closed D-Bus User Message Bus Socket.
Jan 22 04:42:03 np0005591760 systemd[154100]: Stopped Create User's Volatile Files and Directories.
Jan 22 04:42:03 np0005591760 systemd[154100]: Removed slice User Application Slice.
Jan 22 04:42:03 np0005591760 systemd[154100]: Reached target Shutdown.
Jan 22 04:42:03 np0005591760 systemd[154100]: Finished Exit the Session.
Jan 22 04:42:03 np0005591760 systemd[154100]: Reached target Exit the Session.
Jan 22 04:42:04 np0005591760 systemd[1]: user@0.service: Deactivated successfully.
Jan 22 04:42:04 np0005591760 systemd[1]: Stopped User Manager for UID 0.
Jan 22 04:42:04 np0005591760 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 22 04:42:04 np0005591760 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 22 04:42:04 np0005591760 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 22 04:42:04 np0005591760 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 22 04:42:04 np0005591760 systemd[1]: Removed slice User Slice of UID 0.
Jan 22 04:42:04 np0005591760 python3.9[155258]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:42:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:04 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 04:42:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:04.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 04:42:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:05 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:05 np0005591760 python3.9[155415]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000013s ======
Jan 22 04:42:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:05.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Jan 22 04:42:05 np0005591760 python3.9[155567]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:05 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 22 04:42:06 np0005591760 python3.9[155720]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:06 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:06 np0005591760 python3.9[155872]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:06.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:06.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:06.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:06.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:06.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:07 np0005591760 python3.9[156025]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:07 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:42:07] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Jan 22 04:42:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:42:07] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Jan 22 04:42:07 np0005591760 python3.9[156175]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:42:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:07.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:07 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 22 04:42:08 np0005591760 python3.9[156328]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 22 04:42:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:42:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:08 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:08.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:09 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:09 np0005591760 python3.9[156479]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094209 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:42:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:09.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:09 np0005591760 python3.9[156600]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074928.870121-213-71157570555887/.source follow=False _original_basename=haproxy.j2 checksum=1daf285be4abb25cbd7ba376734de140aac9aefe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:09 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0740adab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 22 04:42:10 np0005591760 python3.9[156751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:10 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:10 np0005591760 python3.9[156897]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074929.9136062-258-121847378258973/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000013s ======
Jan 22 04:42:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:10.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Jan 22 04:42:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:11 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:11 np0005591760 python3.9[157050]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:42:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000013s ======
Jan 22 04:42:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:11.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Jan 22 04:42:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:11 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 22 04:42:11 np0005591760 python3.9[157136]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:42:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:42:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:42:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:42:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:42:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:12 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:12 np0005591760 podman[157301]: 2026-01-22 09:42:12.842175748 +0000 UTC m=+0.028520689 container create d2abd5ecd909314e37c74166666235eb316a219690629ea1d22c44d939754156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_elgamal, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:42:12 np0005591760 systemd[1]: Started libpod-conmon-d2abd5ecd909314e37c74166666235eb316a219690629ea1d22c44d939754156.scope.
Jan 22 04:42:12 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:42:12 np0005591760 podman[157301]: 2026-01-22 09:42:12.888599918 +0000 UTC m=+0.074944858 container init d2abd5ecd909314e37c74166666235eb316a219690629ea1d22c44d939754156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:42:12 np0005591760 podman[157301]: 2026-01-22 09:42:12.893905485 +0000 UTC m=+0.080250425 container start d2abd5ecd909314e37c74166666235eb316a219690629ea1d22c44d939754156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_elgamal, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:42:12 np0005591760 podman[157301]: 2026-01-22 09:42:12.895054644 +0000 UTC m=+0.081399584 container attach d2abd5ecd909314e37c74166666235eb316a219690629ea1d22c44d939754156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:42:12 np0005591760 dreamy_elgamal[157314]: 167 167
Jan 22 04:42:12 np0005591760 systemd[1]: libpod-d2abd5ecd909314e37c74166666235eb316a219690629ea1d22c44d939754156.scope: Deactivated successfully.
Jan 22 04:42:12 np0005591760 conmon[157314]: conmon d2abd5ecd909314e37c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2abd5ecd909314e37c74166666235eb316a219690629ea1d22c44d939754156.scope/container/memory.events
Jan 22 04:42:12 np0005591760 podman[157301]: 2026-01-22 09:42:12.898820815 +0000 UTC m=+0.085165775 container died d2abd5ecd909314e37c74166666235eb316a219690629ea1d22c44d939754156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:42:12 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e8dc16148de6444bf34474f73b26513e667bae7485107f46343ef20099a346b8-merged.mount: Deactivated successfully.
Jan 22 04:42:12 np0005591760 podman[157301]: 2026-01-22 09:42:12.919219127 +0000 UTC m=+0.105564067 container remove d2abd5ecd909314e37c74166666235eb316a219690629ea1d22c44d939754156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:42:12 np0005591760 podman[157301]: 2026-01-22 09:42:12.831191731 +0000 UTC m=+0.017536681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:42:12 np0005591760 systemd[1]: libpod-conmon-d2abd5ecd909314e37c74166666235eb316a219690629ea1d22c44d939754156.scope: Deactivated successfully.
Jan 22 04:42:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:12.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:13 np0005591760 podman[157336]: 2026-01-22 09:42:13.040585628 +0000 UTC m=+0.031982495 container create 9f7d7b98a07d249ef87a0f5c03e2e141f0d1f6f974a4077f6f8500be277e7651 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_black, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:42:13 np0005591760 systemd[1]: Started libpod-conmon-9f7d7b98a07d249ef87a0f5c03e2e141f0d1f6f974a4077f6f8500be277e7651.scope.
Jan 22 04:42:13 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:42:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc183d36046c1a5ce5944796ca5300a080b2f0657e48faa4ca49b2620fbfe88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc183d36046c1a5ce5944796ca5300a080b2f0657e48faa4ca49b2620fbfe88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc183d36046c1a5ce5944796ca5300a080b2f0657e48faa4ca49b2620fbfe88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc183d36046c1a5ce5944796ca5300a080b2f0657e48faa4ca49b2620fbfe88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc183d36046c1a5ce5944796ca5300a080b2f0657e48faa4ca49b2620fbfe88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:13 np0005591760 podman[157336]: 2026-01-22 09:42:13.084620717 +0000 UTC m=+0.076017603 container init 9f7d7b98a07d249ef87a0f5c03e2e141f0d1f6f974a4077f6f8500be277e7651 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_black, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:42:13 np0005591760 podman[157336]: 2026-01-22 09:42:13.090263491 +0000 UTC m=+0.081660367 container start 9f7d7b98a07d249ef87a0f5c03e2e141f0d1f6f974a4077f6f8500be277e7651 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_black, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 22 04:42:13 np0005591760 podman[157336]: 2026-01-22 09:42:13.091644477 +0000 UTC m=+0.083041354 container attach 9f7d7b98a07d249ef87a0f5c03e2e141f0d1f6f974a4077f6f8500be277e7651 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:42:13 np0005591760 podman[157336]: 2026-01-22 09:42:13.02925704 +0000 UTC m=+0.020653936 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:42:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900bfc80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:42:13 np0005591760 competent_black[157372]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:42:13 np0005591760 competent_black[157372]: --> All data devices are unavailable
Jan 22 04:42:13 np0005591760 systemd[1]: libpod-9f7d7b98a07d249ef87a0f5c03e2e141f0d1f6f974a4077f6f8500be277e7651.scope: Deactivated successfully.
Jan 22 04:42:13 np0005591760 podman[157336]: 2026-01-22 09:42:13.360846153 +0000 UTC m=+0.352243039 container died 9f7d7b98a07d249ef87a0f5c03e2e141f0d1f6f974a4077f6f8500be277e7651 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:42:13 np0005591760 systemd[1]: var-lib-containers-storage-overlay-2dc183d36046c1a5ce5944796ca5300a080b2f0657e48faa4ca49b2620fbfe88-merged.mount: Deactivated successfully.
Jan 22 04:42:13 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:42:13 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:42:13 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:42:13 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:42:13 np0005591760 podman[157336]: 2026-01-22 09:42:13.386384881 +0000 UTC m=+0.377781756 container remove 9f7d7b98a07d249ef87a0f5c03e2e141f0d1f6f974a4077f6f8500be277e7651 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_black, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:42:13 np0005591760 systemd[1]: libpod-conmon-9f7d7b98a07d249ef87a0f5c03e2e141f0d1f6f974a4077f6f8500be277e7651.scope: Deactivated successfully.
Jan 22 04:42:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:13.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:13 np0005591760 python3.9[157574]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 04:42:13 np0005591760 podman[157607]: 2026-01-22 09:42:13.809248419 +0000 UTC m=+0.032181871 container create e2acc44907d7ba13389ddc10add41ca6d5a1b3ab0dc0721e061f2009a9ff8ee0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:42:13 np0005591760 systemd[1]: Started libpod-conmon-e2acc44907d7ba13389ddc10add41ca6d5a1b3ab0dc0721e061f2009a9ff8ee0.scope.
Jan 22 04:42:13 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:42:13 np0005591760 podman[157607]: 2026-01-22 09:42:13.865445049 +0000 UTC m=+0.088378500 container init e2acc44907d7ba13389ddc10add41ca6d5a1b3ab0dc0721e061f2009a9ff8ee0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:42:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:13 np0005591760 podman[157607]: 2026-01-22 09:42:13.874208976 +0000 UTC m=+0.097142427 container start e2acc44907d7ba13389ddc10add41ca6d5a1b3ab0dc0721e061f2009a9ff8ee0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_aryabhata, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:42:13 np0005591760 podman[157607]: 2026-01-22 09:42:13.875611774 +0000 UTC m=+0.098545225 container attach e2acc44907d7ba13389ddc10add41ca6d5a1b3ab0dc0721e061f2009a9ff8ee0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_aryabhata, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Jan 22 04:42:13 np0005591760 vibrant_aryabhata[157624]: 167 167
Jan 22 04:42:13 np0005591760 systemd[1]: libpod-e2acc44907d7ba13389ddc10add41ca6d5a1b3ab0dc0721e061f2009a9ff8ee0.scope: Deactivated successfully.
Jan 22 04:42:13 np0005591760 podman[157607]: 2026-01-22 09:42:13.877536458 +0000 UTC m=+0.100469908 container died e2acc44907d7ba13389ddc10add41ca6d5a1b3ab0dc0721e061f2009a9ff8ee0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_aryabhata, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:42:13 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c0f2fe42b911dbb04fee71f381541a75ef11505a1a0d2d9ebc08c1853bc2e71e-merged.mount: Deactivated successfully.
Jan 22 04:42:13 np0005591760 podman[157607]: 2026-01-22 09:42:13.797054297 +0000 UTC m=+0.019987769 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:42:13 np0005591760 podman[157607]: 2026-01-22 09:42:13.89889886 +0000 UTC m=+0.121832311 container remove e2acc44907d7ba13389ddc10add41ca6d5a1b3ab0dc0721e061f2009a9ff8ee0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_aryabhata, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:42:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:42:13 np0005591760 systemd[1]: libpod-conmon-e2acc44907d7ba13389ddc10add41ca6d5a1b3ab0dc0721e061f2009a9ff8ee0.scope: Deactivated successfully.
Jan 22 04:42:14 np0005591760 podman[157670]: 2026-01-22 09:42:14.021504019 +0000 UTC m=+0.027859520 container create b96b4434f15f3d6a16679ffc57d3a6bdbf3502a6d65aa0556ca8f12e52799cbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_mendel, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:42:14 np0005591760 systemd[1]: Started libpod-conmon-b96b4434f15f3d6a16679ffc57d3a6bdbf3502a6d65aa0556ca8f12e52799cbc.scope.
Jan 22 04:42:14 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:42:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e796f5aa80c48d473ea82ddf7f9ed87b5894b578f11fa3065464d0a7753070e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e796f5aa80c48d473ea82ddf7f9ed87b5894b578f11fa3065464d0a7753070e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e796f5aa80c48d473ea82ddf7f9ed87b5894b578f11fa3065464d0a7753070e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e796f5aa80c48d473ea82ddf7f9ed87b5894b578f11fa3065464d0a7753070e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:14 np0005591760 podman[157670]: 2026-01-22 09:42:14.076870011 +0000 UTC m=+0.083225511 container init b96b4434f15f3d6a16679ffc57d3a6bdbf3502a6d65aa0556ca8f12e52799cbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:42:14 np0005591760 podman[157670]: 2026-01-22 09:42:14.081141807 +0000 UTC m=+0.087497307 container start b96b4434f15f3d6a16679ffc57d3a6bdbf3502a6d65aa0556ca8f12e52799cbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_mendel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:42:14 np0005591760 podman[157670]: 2026-01-22 09:42:14.082424789 +0000 UTC m=+0.088780289 container attach b96b4434f15f3d6a16679ffc57d3a6bdbf3502a6d65aa0556ca8f12e52799cbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_mendel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:42:14 np0005591760 podman[157670]: 2026-01-22 09:42:14.010903525 +0000 UTC m=+0.017259046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:42:14 np0005591760 boring_mendel[157683]: {
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:    "0": [
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:        {
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            "devices": [
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "/dev/loop3"
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            ],
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            "lv_name": "ceph_lv0",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            "lv_size": "21470642176",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            "name": "ceph_lv0",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            "tags": {
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.cluster_name": "ceph",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.crush_device_class": "",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.encrypted": "0",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.osd_id": "0",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.type": "block",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.vdo": "0",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:                "ceph.with_tpm": "0"
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            },
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            "type": "block",
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:            "vg_name": "ceph_vg0"
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:        }
Jan 22 04:42:14 np0005591760 boring_mendel[157683]:    ]
Jan 22 04:42:14 np0005591760 boring_mendel[157683]: }
Jan 22 04:42:14 np0005591760 systemd[1]: libpod-b96b4434f15f3d6a16679ffc57d3a6bdbf3502a6d65aa0556ca8f12e52799cbc.scope: Deactivated successfully.
Jan 22 04:42:14 np0005591760 podman[157670]: 2026-01-22 09:42:14.322971253 +0000 UTC m=+0.329326753 container died b96b4434f15f3d6a16679ffc57d3a6bdbf3502a6d65aa0556ca8f12e52799cbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:42:14 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e796f5aa80c48d473ea82ddf7f9ed87b5894b578f11fa3065464d0a7753070e4-merged.mount: Deactivated successfully.
Jan 22 04:42:14 np0005591760 podman[157670]: 2026-01-22 09:42:14.347179648 +0000 UTC m=+0.353535148 container remove b96b4434f15f3d6a16679ffc57d3a6bdbf3502a6d65aa0556ca8f12e52799cbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 22 04:42:14 np0005591760 systemd[1]: libpod-conmon-b96b4434f15f3d6a16679ffc57d3a6bdbf3502a6d65aa0556ca8f12e52799cbc.scope: Deactivated successfully.
Jan 22 04:42:14 np0005591760 python3.9[157825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:14 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:14 np0005591760 podman[157981]: 2026-01-22 09:42:14.765480323 +0000 UTC m=+0.031071764 container create 4611053290dc061abe3723a3d68c57254e1505f28f9dfa4ea19c2d528bb7ded6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_saha, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:42:14 np0005591760 systemd[1]: Started libpod-conmon-4611053290dc061abe3723a3d68c57254e1505f28f9dfa4ea19c2d528bb7ded6.scope.
Jan 22 04:42:14 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:42:14 np0005591760 podman[157981]: 2026-01-22 09:42:14.812927805 +0000 UTC m=+0.078519246 container init 4611053290dc061abe3723a3d68c57254e1505f28f9dfa4ea19c2d528bb7ded6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 04:42:14 np0005591760 podman[157981]: 2026-01-22 09:42:14.817737715 +0000 UTC m=+0.083329146 container start 4611053290dc061abe3723a3d68c57254e1505f28f9dfa4ea19c2d528bb7ded6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_saha, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 04:42:14 np0005591760 podman[157981]: 2026-01-22 09:42:14.818924035 +0000 UTC m=+0.084515487 container attach 4611053290dc061abe3723a3d68c57254e1505f28f9dfa4ea19c2d528bb7ded6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_saha, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 04:42:14 np0005591760 relaxed_saha[158018]: 167 167
Jan 22 04:42:14 np0005591760 podman[157981]: 2026-01-22 09:42:14.820571705 +0000 UTC m=+0.086163137 container died 4611053290dc061abe3723a3d68c57254e1505f28f9dfa4ea19c2d528bb7ded6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 04:42:14 np0005591760 systemd[1]: libpod-4611053290dc061abe3723a3d68c57254e1505f28f9dfa4ea19c2d528bb7ded6.scope: Deactivated successfully.
Jan 22 04:42:14 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b297ed9c93bb5c07c0e8723e044391c518469dbe17caa38473fba7d2270366bc-merged.mount: Deactivated successfully.
Jan 22 04:42:14 np0005591760 podman[157981]: 2026-01-22 09:42:14.842458518 +0000 UTC m=+0.108049949 container remove 4611053290dc061abe3723a3d68c57254e1505f28f9dfa4ea19c2d528bb7ded6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_saha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:42:14 np0005591760 podman[157981]: 2026-01-22 09:42:14.752661731 +0000 UTC m=+0.018253183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:42:14 np0005591760 systemd[1]: libpod-conmon-4611053290dc061abe3723a3d68c57254e1505f28f9dfa4ea19c2d528bb7ded6.scope: Deactivated successfully.
Jan 22 04:42:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:14.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:14 np0005591760 podman[158066]: 2026-01-22 09:42:14.962920492 +0000 UTC m=+0.027773388 container create 5f3f2752acd9fe7e266b1e48e2ad97c6c4db49abaea2ed8982c1552db42e6368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:42:14 np0005591760 python3.9[158050]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074934.1543005-369-44620974997643/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:14 np0005591760 systemd[1]: Started libpod-conmon-5f3f2752acd9fe7e266b1e48e2ad97c6c4db49abaea2ed8982c1552db42e6368.scope.
Jan 22 04:42:15 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:42:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9594eb6a8bd61c8714d8986228bc04872c9c67dc91a2b340f0d6a1a8cc893e18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9594eb6a8bd61c8714d8986228bc04872c9c67dc91a2b340f0d6a1a8cc893e18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9594eb6a8bd61c8714d8986228bc04872c9c67dc91a2b340f0d6a1a8cc893e18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9594eb6a8bd61c8714d8986228bc04872c9c67dc91a2b340f0d6a1a8cc893e18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:15 np0005591760 podman[158066]: 2026-01-22 09:42:15.023980984 +0000 UTC m=+0.088833880 container init 5f3f2752acd9fe7e266b1e48e2ad97c6c4db49abaea2ed8982c1552db42e6368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 22 04:42:15 np0005591760 podman[158066]: 2026-01-22 09:42:15.029166654 +0000 UTC m=+0.094019551 container start 5f3f2752acd9fe7e266b1e48e2ad97c6c4db49abaea2ed8982c1552db42e6368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hugle, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 22 04:42:15 np0005591760 podman[158066]: 2026-01-22 09:42:15.030349178 +0000 UTC m=+0.095202073 container attach 5f3f2752acd9fe7e266b1e48e2ad97c6c4db49abaea2ed8982c1552db42e6368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 04:42:15 np0005591760 podman[158066]: 2026-01-22 09:42:14.95181218 +0000 UTC m=+0.016665096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:42:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:15 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:15 np0005591760 python3.9[158255]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:15 np0005591760 quirky_hugle[158079]: {}
Jan 22 04:42:15 np0005591760 lvm[158329]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:42:15 np0005591760 lvm[158329]: VG ceph_vg0 finished
Jan 22 04:42:15 np0005591760 systemd[1]: libpod-5f3f2752acd9fe7e266b1e48e2ad97c6c4db49abaea2ed8982c1552db42e6368.scope: Deactivated successfully.
Jan 22 04:42:15 np0005591760 podman[158066]: 2026-01-22 09:42:15.539386033 +0000 UTC m=+0.604238939 container died 5f3f2752acd9fe7e266b1e48e2ad97c6c4db49abaea2ed8982c1552db42e6368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hugle, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:42:15 np0005591760 lvm[158351]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:42:15 np0005591760 lvm[158351]: VG ceph_vg0 finished
Jan 22 04:42:15 np0005591760 systemd[1]: var-lib-containers-storage-overlay-9594eb6a8bd61c8714d8986228bc04872c9c67dc91a2b340f0d6a1a8cc893e18-merged.mount: Deactivated successfully.
Jan 22 04:42:15 np0005591760 podman[158066]: 2026-01-22 09:42:15.564543551 +0000 UTC m=+0.629396447 container remove 5f3f2752acd9fe7e266b1e48e2ad97c6c4db49abaea2ed8982c1552db42e6368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 04:42:15 np0005591760 systemd[1]: libpod-conmon-5f3f2752acd9fe7e266b1e48e2ad97c6c4db49abaea2ed8982c1552db42e6368.scope: Deactivated successfully.
Jan 22 04:42:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:42:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:42:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:42:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:42:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:15.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:15 np0005591760 python3.9[158464]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074935.0965698-369-220539226836105/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:15 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900c0780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:42:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:16 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:16 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:42:16 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:42:16 np0005591760 python3.9[158615]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:16.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:16.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:16.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:16.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:16.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:17 np0005591760 python3.9[158737]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074936.4229617-501-277441604376056/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:17 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:17 np0005591760 python3.9[158887]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:42:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Jan 22 04:42:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:42:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Jan 22 04:42:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000013s ======
Jan 22 04:42:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:17.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Jan 22 04:42:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:17 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:42:17 np0005591760 python3.9[159008]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074937.2277377-501-94526753165901/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 04:42:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2846 writes, 13K keys, 2846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 2846 writes, 2846 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2846 writes, 13K keys, 2846 commit groups, 1.0 writes per commit group, ingest: 25.21 MB, 0.04 MB/s#012Interval WAL: 2846 writes, 2846 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    397.3      0.06              0.04         6    0.010       0      0       0.0       0.0#012  L6      1/0   11.61 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.7    459.4    389.2      0.16              0.11         5    0.032     20K   2389       0.0       0.0#012 Sum      1/0   11.61 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7    337.8    391.4      0.22              0.15        11    0.020     20K   2389       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7    339.9    393.5      0.22              0.15        10    0.022     20K   2389       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    459.4    389.2      0.16              0.11         5    0.032     20K   2389       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    405.9      0.06              0.04         5    0.011       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     33.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.022, interval 0.022#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d6a5b429b0#2 capacity: 304.00 MB usage: 2.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(165,2.09 MB,0.689065%) FilterBlock(12,64.36 KB,0.0206747%) IndexBlock(12,135.48 KB,0.0435227%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 04:42:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:42:18 np0005591760 python3.9[159159]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:42:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:18 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:42:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:18 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900c0780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:18.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:18 np0005591760 python3.9[159314]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:19 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:42:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:42:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:42:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:42:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:42:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:42:19 np0005591760 python3.9[159466]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:19.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:19 np0005591760 python3.9[159544]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:19 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:42:20 np0005591760 python3.9[159697]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:20 np0005591760 python3.9[159775]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:20 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:20.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:20 np0005591760 python3.9[159928]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:42:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:21 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900c1490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:21 np0005591760 python3.9[160080]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:21 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:42:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:21 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:42:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 04:42:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:21.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 04:42:21 np0005591760 python3.9[160158]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:42:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:21 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:42:22 np0005591760 python3.9[160311]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:22 np0005591760 python3.9[160389]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:42:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:22 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:22.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:23 np0005591760 python3.9[160542]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:42:23 np0005591760 systemd[1]: Reloading.
Jan 22 04:42:23 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:42:23 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:42:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:23 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:42:23 np0005591760 ovn_controller[154073]: 2026-01-22T09:42:23Z|00025|memory|INFO|16000 kB peak resident set size after 29.8 seconds
Jan 22 04:42:23 np0005591760 ovn_controller[154073]: 2026-01-22T09:42:23Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 22 04:42:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:23.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:23 np0005591760 podman[160703]: 2026-01-22 09:42:23.685394993 +0000 UTC m=+0.064374490 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 04:42:23 np0005591760 python3.9[160748]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:23 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900c1490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:42:24 np0005591760 python3.9[160834]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:42:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:24 : epoch 6971efde : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:42:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:24 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:24 np0005591760 python3.9[160986]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:24 np0005591760 python3.9[161065]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:42:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:24.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:25 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:25 np0005591760 python3.9[161217]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:42:25 np0005591760 systemd[1]: Reloading.
Jan 22 04:42:25 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:42:25 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:42:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:25.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:25 np0005591760 systemd[1]: Starting Create netns directory...
Jan 22 04:42:25 np0005591760 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 04:42:25 np0005591760 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 04:42:25 np0005591760 systemd[1]: Finished Create netns directory.
Jan 22 04:42:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:25 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:42:26 np0005591760 python3.9[161411]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:26 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900c21a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:26 np0005591760 python3.9[161564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:26.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:26.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:26.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:26.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:26.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:27 np0005591760 python3.9[161687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769074946.467426-954-92196784008123/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:27 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c010a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:42:27] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:42:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:42:27] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:42:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:27.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:27 np0005591760 python3.9[161839]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:42:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:27 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:42:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:42:28 np0005591760 python3.9[161992]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:42:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:28 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:28 np0005591760 python3.9[162145]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:28.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:29 np0005591760 python3.9[162268]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074948.46846-1053-204179447835618/.source.json _original_basename=.3ma__x26 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:42:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:29 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:29 np0005591760 python3.9[162418]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:42:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000013s ======
Jan 22 04:42:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:29.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Jan 22 04:42:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:29 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900c21a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:42:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:30 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c018430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:30.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:31 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c018430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:31 np0005591760 python3.9[162868]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 22 04:42:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094231 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:42:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:31.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:31 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:42:32 np0005591760 python3.9[163020]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 04:42:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:32 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:32 np0005591760 python3[163173]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 04:42:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:32.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:33 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900c21a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:42:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:33.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:33 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c018430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 22 04:42:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:34 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 04:42:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:34.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 04:42:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:35 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:35.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:35 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c018430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Jan 22 04:42:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:36 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900c21a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:36.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:36.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:36.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:36.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:36.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:37 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:42:37] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Jan 22 04:42:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:42:37] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Jan 22 04:42:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:37.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:37 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:42:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:42:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:38 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c018430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:38.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:39 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900c21a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:39.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:39 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:42:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:40 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:40.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:41 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06c018430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:41.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:41 np0005591760 podman[163184]: 2026-01-22 09:42:41.730754317 +0000 UTC m=+8.838022338 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 22 04:42:41 np0005591760 podman[163293]: 2026-01-22 09:42:41.82426248 +0000 UTC m=+0.028329929 container create ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 04:42:41 np0005591760 podman[163293]: 2026-01-22 09:42:41.810427721 +0000 UTC m=+0.014495188 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 22 04:42:41 np0005591760 python3[163173]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 22 04:42:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:41 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900c21a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:42:42 np0005591760 python3.9[163473]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:42:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:42 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:42.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:43 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:42:43 np0005591760 python3.9[163630]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:42:43 np0005591760 python3.9[163706]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:42:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:43.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:43 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb040001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:42:44 np0005591760 python3.9[163858]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769074963.6407692-1287-159163488767074/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:42:44 np0005591760 python3.9[163934]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 04:42:44 np0005591760 systemd[1]: Reloading.
Jan 22 04:42:44 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:42:44 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:42:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:44 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900c2eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:44.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:45 np0005591760 python3.9[164046]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:42:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:45 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:45 np0005591760 systemd[1]: Reloading.
Jan 22 04:42:45 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:42:45 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:42:45 np0005591760 systemd[1]: Starting ovn_metadata_agent container...
Jan 22 04:42:45 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:42:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76590d42648a32a1a2a6d4a10519979c2082684bc18ab001e6c354e899e3c1cd/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76590d42648a32a1a2a6d4a10519979c2082684bc18ab001e6c354e899e3c1cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 04:42:45 np0005591760 systemd[1]: Started /usr/bin/podman healthcheck run ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09.
Jan 22 04:42:45 np0005591760 podman[164086]: 2026-01-22 09:42:45.553396977 +0000 UTC m=+0.076723405 container init ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: + sudo -E kolla_set_configs
Jan 22 04:42:45 np0005591760 podman[164086]: 2026-01-22 09:42:45.572902783 +0000 UTC m=+0.096229213 container start ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 04:42:45 np0005591760 edpm-start-podman-container[164086]: ovn_metadata_agent
Jan 22 04:42:45 np0005591760 podman[164104]: 2026-01-22 09:42:45.626152239 +0000 UTC m=+0.042974497 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 04:42:45 np0005591760 edpm-start-podman-container[164085]: Creating additional drop-in dependency for "ovn_metadata_agent" (ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09)
Jan 22 04:42:45 np0005591760 systemd[1]: Reloading.
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Validating config file
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Copying service configuration files
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Writing out command to execute
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: ++ cat /run_command
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: + CMD=neutron-ovn-metadata-agent
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: + ARGS=
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: + sudo kolla_copy_cacerts
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: + [[ ! -n '' ]]
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: + . kolla_extend_start
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: Running command: 'neutron-ovn-metadata-agent'
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: + umask 0022
Jan 22 04:42:45 np0005591760 ovn_metadata_agent[164098]: + exec neutron-ovn-metadata-agent
Jan 22 04:42:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:45.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:45 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:42:45 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:42:45 np0005591760 systemd[1]: Started ovn_metadata_agent container.
Jan 22 04:42:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:45 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:42:46 np0005591760 python3.9[164330]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 22 04:42:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:46 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:46.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:46.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:46.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:46.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:46.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:47 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.257 164103 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.257 164103 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.257 164103 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.257 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.258 164103 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.258 164103 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.258 164103 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.258 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.258 164103 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.258 164103 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.258 164103 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.258 164103 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.258 164103 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.259 164103 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.259 164103 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.259 164103 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.259 164103 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.259 164103 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.259 164103 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.259 164103 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.259 164103 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.259 164103 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.260 164103 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.260 164103 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.260 164103 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.260 164103 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.260 164103 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.260 164103 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.260 164103 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.260 164103 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.260 164103 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.260 164103 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.261 164103 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.261 164103 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.261 164103 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.261 164103 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.261 164103 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.261 164103 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.261 164103 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.261 164103 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.261 164103 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.262 164103 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.262 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.262 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.262 164103 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.262 164103 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.262 164103 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.262 164103 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.262 164103 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.262 164103 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.262 164103 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.263 164103 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.263 164103 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.263 164103 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.263 164103 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.263 164103 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.263 164103 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.263 164103 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.263 164103 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.263 164103 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.263 164103 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.264 164103 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.264 164103 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.264 164103 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.264 164103 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.264 164103 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.264 164103 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.264 164103 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.264 164103 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.264 164103 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.264 164103 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.265 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.265 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.265 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.265 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.265 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.265 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.265 164103 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.265 164103 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.265 164103 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.266 164103 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.266 164103 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.266 164103 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.266 164103 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.266 164103 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.266 164103 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.266 164103 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.266 164103 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.266 164103 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.266 164103 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.267 164103 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.267 164103 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.267 164103 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.267 164103 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.267 164103 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.267 164103 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.267 164103 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.267 164103 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.267 164103 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.267 164103 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.267 164103 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.268 164103 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.268 164103 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.268 164103 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.268 164103 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.268 164103 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.268 164103 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.268 164103 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.268 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.268 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.268 164103 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.269 164103 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.269 164103 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.269 164103 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.269 164103 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.269 164103 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.269 164103 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.269 164103 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.269 164103 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.269 164103 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.270 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.270 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.270 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.270 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.270 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.270 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.270 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.270 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.270 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.271 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.271 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.271 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.271 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.271 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.271 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.271 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.271 164103 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.271 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.271 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.272 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.272 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.272 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.272 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.272 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.272 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.272 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.272 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.272 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.273 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.273 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.273 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.273 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.273 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.273 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.273 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.273 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.273 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.273 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.274 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.274 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.274 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.274 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.274 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.274 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.274 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.274 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.274 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.274 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.275 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.275 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.275 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.275 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.275 164103 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.275 164103 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.275 164103 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.275 164103 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.275 164103 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.276 164103 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.276 164103 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.276 164103 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.276 164103 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.276 164103 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.276 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.276 164103 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.276 164103 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.276 164103 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.276 164103 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.277 164103 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.277 164103 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.277 164103 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.277 164103 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.277 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.277 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.277 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.277 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.277 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.278 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.278 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.278 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.278 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.278 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.278 164103 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.278 164103 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.278 164103 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.278 164103 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.278 164103 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.279 164103 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.279 164103 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.279 164103 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.279 164103 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.279 164103 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.279 164103 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.279 164103 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.279 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.280 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.280 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.280 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.280 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.280 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.280 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.280 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.280 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.280 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.281 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.281 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.281 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.281 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.281 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.281 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.281 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.281 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.281 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.281 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.282 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.282 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.282 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.282 164103 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.282 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.282 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.282 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.282 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.282 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.283 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.283 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.283 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.283 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.283 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.283 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.283 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.283 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.283 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.284 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.284 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.284 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.284 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.284 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.284 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.284 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.284 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.284 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.284 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.285 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.285 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.285 164103 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.285 164103 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.285 164103 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.285 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.285 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.285 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.285 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.286 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.286 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.286 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.286 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.286 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.286 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.286 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.286 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.286 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.286 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.287 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.287 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.287 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.287 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.287 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.287 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.287 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.287 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.287 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.288 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.288 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.288 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.288 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.288 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.288 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.288 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.288 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.288 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.288 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.289 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.289 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.289 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.289 164103 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.289 164103 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.296 164103 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.296 164103 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.296 164103 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.297 164103 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.297 164103 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.309 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name e200ec57-2c57-4374-93b1-e04a1348b8ea (UUID: e200ec57-2c57-4374-93b1-e04a1348b8ea) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.325 164103 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.325 164103 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.325 164103 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.325 164103 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.327 164103 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.333 164103 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.336 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'e200ec57-2c57-4374-93b1-e04a1348b8ea'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], external_ids={}, name=e200ec57-2c57-4374-93b1-e04a1348b8ea, nb_cfg_timestamp=1769074921858, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.337 164103 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f4a0d293a00>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.338 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.338 164103 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.338 164103 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.338 164103 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.342 164103 DEBUG oslo_service.service [-] Started child 164382 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.344 164103 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpkwxvkg0i/privsep.sock']#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.344 164382 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-497288'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.361 164382 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.361 164382 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.361 164382 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.363 164382 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.368 164382 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.372 164382 INFO eventlet.wsgi.server [-] (164382) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 22 04:42:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:42:47] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Jan 22 04:42:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:42:47] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Jan 22 04:42:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000013s ======
Jan 22 04:42:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:47.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Jan 22 04:42:47 np0005591760 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.878 164103 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.879 164103 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpkwxvkg0i/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.803 164492 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.807 164492 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.808 164492 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.808 164492 INFO oslo.privsep.daemon [-] privsep daemon running as pid 164492#033[00m
Jan 22 04:42:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:47.881 164492 DEBUG oslo.privsep.daemon [-] privsep: reply[27b80a25-1b8c-4f6a-bfd5-57d46ee9c142]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:42:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:47 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:42:48 np0005591760 python3.9[164491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:42:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.306 164492 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.306 164492 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.307 164492 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:42:48 np0005591760 python3.9[164622]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769074967.3413243-1422-102082039350331/.source.yaml _original_basename=._zkdajv1 follow=False checksum=7bc24a5e53d8c45e39faf7b2fbdd2561f35405e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:42:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:48 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.756 164492 DEBUG oslo.privsep.daemon [-] privsep: reply[b88c6c88-d4d8-4195-acd1-8fa7c5682665]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.758 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, column=external_ids, values=({'neutron:ovn-metadata-id': '8a472236-07bc-5e38-9238-ec916d66b647'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.763 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.767 164103 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.767 164103 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.767 164103 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.767 164103 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.767 164103 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.768 164103 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.768 164103 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.768 164103 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.768 164103 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.768 164103 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.768 164103 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.768 164103 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.768 164103 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.769 164103 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.769 164103 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.769 164103 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.769 164103 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.769 164103 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.769 164103 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.770 164103 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.770 164103 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.770 164103 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.770 164103 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.770 164103 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.770 164103 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.770 164103 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.771 164103 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.771 164103 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.771 164103 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.771 164103 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.771 164103 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.771 164103 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.771 164103 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.771 164103 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.771 164103 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.772 164103 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.772 164103 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.772 164103 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.772 164103 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.772 164103 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.772 164103 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.772 164103 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.772 164103 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.773 164103 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.773 164103 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.773 164103 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.773 164103 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.773 164103 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.773 164103 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.773 164103 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.773 164103 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.773 164103 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.773 164103 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.774 164103 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.774 164103 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.774 164103 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.774 164103 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.774 164103 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.774 164103 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.774 164103 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.774 164103 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.774 164103 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.774 164103 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.775 164103 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.775 164103 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.775 164103 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.775 164103 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.775 164103 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.775 164103 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.775 164103 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.775 164103 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.775 164103 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.776 164103 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.776 164103 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.776 164103 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.776 164103 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.776 164103 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.776 164103 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.776 164103 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.776 164103 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.776 164103 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.776 164103 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.777 164103 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.777 164103 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.777 164103 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.777 164103 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.777 164103 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.777 164103 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.777 164103 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.777 164103 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.777 164103 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.778 164103 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.778 164103 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.778 164103 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.778 164103 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.778 164103 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.778 164103 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.778 164103 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.778 164103 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.778 164103 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.778 164103 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.778 164103 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.779 164103 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.779 164103 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.779 164103 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.779 164103 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.779 164103 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.779 164103 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.779 164103 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.779 164103 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.779 164103 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.780 164103 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.780 164103 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.780 164103 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.780 164103 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.780 164103 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.780 164103 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.780 164103 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.780 164103 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.780 164103 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.781 164103 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.781 164103 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.781 164103 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.781 164103 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.781 164103 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.781 164103 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.781 164103 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.781 164103 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.781 164103 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.782 164103 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.782 164103 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.782 164103 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.782 164103 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.782 164103 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.782 164103 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.782 164103 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.782 164103 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.782 164103 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.783 164103 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.783 164103 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.783 164103 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.783 164103 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.783 164103 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.783 164103 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.783 164103 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.783 164103 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.783 164103 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.783 164103 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.784 164103 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.784 164103 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.784 164103 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.784 164103 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.784 164103 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.784 164103 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.784 164103 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.784 164103 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.784 164103 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.784 164103 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.784 164103 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.785 164103 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.785 164103 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.785 164103 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.785 164103 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.785 164103 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.785 164103 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.785 164103 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.785 164103 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.785 164103 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.785 164103 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.786 164103 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.786 164103 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.786 164103 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.786 164103 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.786 164103 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.786 164103 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.786 164103 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.786 164103 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.786 164103 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.787 164103 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.787 164103 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.787 164103 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.787 164103 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.787 164103 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.787 164103 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.787 164103 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.787 164103 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.787 164103 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.787 164103 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.788 164103 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.788 164103 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.788 164103 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.788 164103 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.788 164103 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.788 164103 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.788 164103 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.788 164103 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.788 164103 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.789 164103 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.789 164103 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.789 164103 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.789 164103 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.789 164103 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.789 164103 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.789 164103 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.789 164103 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.789 164103 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.789 164103 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.790 164103 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.790 164103 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.790 164103 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.790 164103 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.790 164103 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.790 164103 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.790 164103 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.790 164103 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.790 164103 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.790 164103 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.790 164103 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.791 164103 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.791 164103 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.791 164103 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.791 164103 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.791 164103 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.791 164103 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.791 164103 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.791 164103 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.791 164103 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.791 164103 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.792 164103 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.792 164103 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.792 164103 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.792 164103 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.792 164103 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.792 164103 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.792 164103 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.792 164103 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.792 164103 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.792 164103 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.793 164103 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.793 164103 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.793 164103 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.793 164103 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.793 164103 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.793 164103 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.793 164103 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.793 164103 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.793 164103 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.794 164103 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.794 164103 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.794 164103 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.794 164103 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.794 164103 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.794 164103 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.794 164103 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.794 164103 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.794 164103 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.794 164103 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.795 164103 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.795 164103 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.795 164103 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.795 164103 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.795 164103 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.795 164103 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.795 164103 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.795 164103 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.795 164103 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.795 164103 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.796 164103 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.796 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.796 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.796 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.796 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.796 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.796 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.796 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.796 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.797 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.797 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.797 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.797 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.797 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.797 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.797 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.797 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.797 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.797 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.798 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.798 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.798 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.798 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.798 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.798 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.798 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.798 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.798 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.798 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.799 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.799 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.799 164103 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.799 164103 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.799 164103 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.799 164103 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.799 164103 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:42:48 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:42:48.799 164103 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 04:42:48 np0005591760 systemd[1]: session-51.scope: Deactivated successfully.
Jan 22 04:42:48 np0005591760 systemd[1]: session-51.scope: Consumed 41.025s CPU time.
Jan 22 04:42:48 np0005591760 systemd-logind[747]: Session 51 logged out. Waiting for processes to exit.
Jan 22 04:42:48 np0005591760 systemd-logind[747]: Removed session 51.
Jan 22 04:42:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:48.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:49 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:42:49
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['default.rgw.control', '.nfs', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.data']
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:42:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:49.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:49 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:42:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:50 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb098006a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 04:42:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 04:42:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:51 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:51.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:51 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:42:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:52 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:52.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:53 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb098006a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:42:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 04:42:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:53.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 04:42:53 np0005591760 systemd-logind[747]: New session 52 of user zuul.
Jan 22 04:42:53 np0005591760 systemd[1]: Started Session 52 of User zuul.
Jan 22 04:42:53 np0005591760 podman[164681]: 2026-01-22 09:42:53.800301578 +0000 UTC m=+0.060581198 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 04:42:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:53 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:42:54 np0005591760 python3.9[164856]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:42:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:54 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 04:42:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 04:42:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:55 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:55 np0005591760 python3.9[165013]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:42:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:55.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:55 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:42:56 np0005591760 python3.9[165175]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 04:42:56 np0005591760 systemd[1]: Reloading.
Jan 22 04:42:56 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:42:56 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:42:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:56 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb094003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:56.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:56.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:56.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:42:56.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:42:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:56.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:57 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb098006a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:57 np0005591760 python3.9[165361]: ansible-ansible.builtin.service_facts Invoked
Jan 22 04:42:57 np0005591760 network[165378]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 04:42:57 np0005591760 network[165379]: 'network-scripts' will be removed from distribution in near future.
Jan 22 04:42:57 np0005591760 network[165380]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 04:42:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:42:57] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Jan 22 04:42:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:42:57] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Jan 22 04:42:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:57.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:57 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:42:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:42:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:58 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:42:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:42:58.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:42:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:59 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb094004570 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:42:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:42:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 04:42:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:42:59.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 04:42:59 np0005591760 python3.9[165644]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:42:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:42:59 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb098007b30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:42:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:43:00 np0005591760 python3.9[165798]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:43:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:00 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:00 np0005591760 python3.9[165952]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:43:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:01.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:01 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:01 np0005591760 python3.9[166105]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:43:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:01.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:01 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb094004570 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:43:01 np0005591760 python3.9[166258]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:43:02 np0005591760 python3.9[166412]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:43:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:02 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb098007b30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 04:43:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:03.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 04:43:03 np0005591760 python3.9[166566]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:43:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:43:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:03.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:03 np0005591760 python3.9[166719]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:03 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:43:04 np0005591760 python3.9[166872]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:04 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb094004570 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:04 np0005591760 python3.9[167024]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:05.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:05 np0005591760 python3.9[167177]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:05 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb098008840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:05 np0005591760 python3.9[167329]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:05.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:05 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:43:05 np0005591760 python3.9[167481]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:06 np0005591760 python3.9[167634]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:06 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:06 np0005591760 python3.9[167787]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:06.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:06.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:06.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:06.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:07.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:07 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb094005a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:07 np0005591760 python3.9[167939]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:43:07] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Jan 22 04:43:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:43:07] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Jan 22 04:43:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:07.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:07 np0005591760 python3.9[168091]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:07 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb098008840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:43:08 np0005591760 python3.9[168244]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:43:08 np0005591760 python3.9[168396]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:08 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088005850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:08 np0005591760 python3.9[168549]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:09.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:09 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:09 np0005591760 python3.9[168701]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:43:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:09 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb094005a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:43:10 np0005591760 python3.9[168854]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:43:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:10 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb094005a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:10 np0005591760 python3.9[169031]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 04:43:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:11.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:11 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088005870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:11 np0005591760 python3.9[169184]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 04:43:11 np0005591760 systemd[1]: Reloading.
Jan 22 04:43:11 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:43:11 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:43:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:11.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:11 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:43:12 np0005591760 python3.9[169372]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:43:12 np0005591760 python3.9[169525]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:43:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:12 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb098009550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:12 np0005591760 python3.9[169679]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:43:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 22 04:43:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:13.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 22 04:43:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb094006b00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:43:13 np0005591760 python3.9[169832]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:43:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 22 04:43:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:13.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 22 04:43:13 np0005591760 python3.9[169985]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:43:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:13 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088005890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:43:14 np0005591760 python3.9[170139]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:43:14 np0005591760 python3.9[170292]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:43:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:14 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb03c00bba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:15.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:15 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb098009550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 22 04:43:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:15.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 22 04:43:15 np0005591760 python3.9[170446]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 22 04:43:15 np0005591760 podman[170448]: 2026-01-22 09:43:15.841045954 +0000 UTC m=+0.043470879 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:43:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:15 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb094006b00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:43:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:43:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:43:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:43:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:43:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:16 np0005591760 python3.9[170732]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 04:43:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[108936]: 22/01/2026 09:43:16 : epoch 6971efde : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb094006b00 fd 48 proxy ignored for local
Jan 22 04:43:16 np0005591760 kernel: ganesha.nfsd[163476]: segfault at 50 ip 00007fb0c5be132e sp 00007fb0317f9210 error 4 in libntirpc.so.5.8[7fb0c5bc6000+2c000] likely on CPU 3 (core 0, socket 3)
Jan 22 04:43:16 np0005591760 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 22 04:43:16 np0005591760 systemd[1]: Started Process Core Dump (PID 170847/UID 0).
Jan 22 04:43:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:43:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:43:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:16.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:16.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:16.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:16.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:17.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:17 np0005591760 python3.9[170974]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 04:43:17 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:43:17 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:17 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:17 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:17 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:17 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:43:17 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:17 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:17 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:43:17 np0005591760 podman[171014]: 2026-01-22 09:43:17.188138413 +0000 UTC m=+0.030117343 container create 1ea55c229c6a766a8f82c78453881edcc456cda9dea6ba2ff3d546ec8b0da176 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 22 04:43:17 np0005591760 systemd[1]: Started libpod-conmon-1ea55c229c6a766a8f82c78453881edcc456cda9dea6ba2ff3d546ec8b0da176.scope.
Jan 22 04:43:17 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:43:17 np0005591760 podman[171014]: 2026-01-22 09:43:17.242139516 +0000 UTC m=+0.084118476 container init 1ea55c229c6a766a8f82c78453881edcc456cda9dea6ba2ff3d546ec8b0da176 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:43:17 np0005591760 podman[171014]: 2026-01-22 09:43:17.247205147 +0000 UTC m=+0.089184068 container start 1ea55c229c6a766a8f82c78453881edcc456cda9dea6ba2ff3d546ec8b0da176 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_volhard, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 04:43:17 np0005591760 podman[171014]: 2026-01-22 09:43:17.248383628 +0000 UTC m=+0.090362558 container attach 1ea55c229c6a766a8f82c78453881edcc456cda9dea6ba2ff3d546ec8b0da176 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_volhard, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:43:17 np0005591760 xenodochial_volhard[171048]: 167 167
Jan 22 04:43:17 np0005591760 systemd[1]: libpod-1ea55c229c6a766a8f82c78453881edcc456cda9dea6ba2ff3d546ec8b0da176.scope: Deactivated successfully.
Jan 22 04:43:17 np0005591760 podman[171014]: 2026-01-22 09:43:17.252386982 +0000 UTC m=+0.094365922 container died 1ea55c229c6a766a8f82c78453881edcc456cda9dea6ba2ff3d546ec8b0da176 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 04:43:17 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c1dd70b18edc0a04eb73d59be608af1e3607647006689b876613c57f5e848141-merged.mount: Deactivated successfully.
Jan 22 04:43:17 np0005591760 podman[171014]: 2026-01-22 09:43:17.272186224 +0000 UTC m=+0.114165155 container remove 1ea55c229c6a766a8f82c78453881edcc456cda9dea6ba2ff3d546ec8b0da176 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 04:43:17 np0005591760 podman[171014]: 2026-01-22 09:43:17.175748357 +0000 UTC m=+0.017727308 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:43:17 np0005591760 systemd[1]: libpod-conmon-1ea55c229c6a766a8f82c78453881edcc456cda9dea6ba2ff3d546ec8b0da176.scope: Deactivated successfully.
Jan 22 04:43:17 np0005591760 podman[171073]: 2026-01-22 09:43:17.400313635 +0000 UTC m=+0.031211001 container create e5cbae0c304a0389dff735c5dd2029dbcf1109eae7168ffdd987a177f02e4917 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_mclean, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:43:17 np0005591760 systemd[1]: Started libpod-conmon-e5cbae0c304a0389dff735c5dd2029dbcf1109eae7168ffdd987a177f02e4917.scope.
Jan 22 04:43:17 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:43:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8232ec9dba21ebdffbc402e5fc09452bd984fd3f05856f5b38370905d78ad5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8232ec9dba21ebdffbc402e5fc09452bd984fd3f05856f5b38370905d78ad5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8232ec9dba21ebdffbc402e5fc09452bd984fd3f05856f5b38370905d78ad5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8232ec9dba21ebdffbc402e5fc09452bd984fd3f05856f5b38370905d78ad5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8232ec9dba21ebdffbc402e5fc09452bd984fd3f05856f5b38370905d78ad5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:17 np0005591760 podman[171073]: 2026-01-22 09:43:17.460163616 +0000 UTC m=+0.091060992 container init e5cbae0c304a0389dff735c5dd2029dbcf1109eae7168ffdd987a177f02e4917 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_mclean, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:43:17 np0005591760 podman[171073]: 2026-01-22 09:43:17.465180123 +0000 UTC m=+0.096077489 container start e5cbae0c304a0389dff735c5dd2029dbcf1109eae7168ffdd987a177f02e4917 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_mclean, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:43:17 np0005591760 podman[171073]: 2026-01-22 09:43:17.466302556 +0000 UTC m=+0.097199922 container attach e5cbae0c304a0389dff735c5dd2029dbcf1109eae7168ffdd987a177f02e4917 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_mclean, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:43:17 np0005591760 podman[171073]: 2026-01-22 09:43:17.387590884 +0000 UTC m=+0.018488279 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:43:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:43:17] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Jan 22 04:43:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:43:17] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Jan 22 04:43:17 np0005591760 stoic_mclean[171086]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:43:17 np0005591760 stoic_mclean[171086]: --> All data devices are unavailable
Jan 22 04:43:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:17.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:17 np0005591760 systemd[1]: libpod-e5cbae0c304a0389dff735c5dd2029dbcf1109eae7168ffdd987a177f02e4917.scope: Deactivated successfully.
Jan 22 04:43:17 np0005591760 conmon[171086]: conmon e5cbae0c304a0389dff7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5cbae0c304a0389dff735c5dd2029dbcf1109eae7168ffdd987a177f02e4917.scope/container/memory.events
Jan 22 04:43:17 np0005591760 podman[171073]: 2026-01-22 09:43:17.745165675 +0000 UTC m=+0.376063041 container died e5cbae0c304a0389dff735c5dd2029dbcf1109eae7168ffdd987a177f02e4917 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:43:17 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ee8232ec9dba21ebdffbc402e5fc09452bd984fd3f05856f5b38370905d78ad5-merged.mount: Deactivated successfully.
Jan 22 04:43:17 np0005591760 podman[171073]: 2026-01-22 09:43:17.773395864 +0000 UTC m=+0.404293230 container remove e5cbae0c304a0389dff735c5dd2029dbcf1109eae7168ffdd987a177f02e4917 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:43:17 np0005591760 systemd[1]: libpod-conmon-e5cbae0c304a0389dff735c5dd2029dbcf1109eae7168ffdd987a177f02e4917.scope: Deactivated successfully.
Jan 22 04:43:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:43:17 np0005591760 systemd-coredump[170848]: Process 108940 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 78:#012#0  0x00007fb0c5be132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012#1  0x0000000000000000 n/a (n/a + 0x0)#012#2  0x00007fb0c5beb900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)#012ELF object binary architecture: AMD x86-64
Jan 22 04:43:17 np0005591760 python3.9[171228]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:43:18 np0005591760 systemd[1]: systemd-coredump@1-170847-0.service: Deactivated successfully.
Jan 22 04:43:18 np0005591760 systemd[1]: systemd-coredump@1-170847-0.service: Consumed 1.205s CPU time.
Jan 22 04:43:18 np0005591760 podman[171299]: 2026-01-22 09:43:18.06688651 +0000 UTC m=+0.025818777 container died d791c845824805d9759eea399ab1b77ce3a5ac18664d4cdeedcfbb4e670a4815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 22 04:43:18 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ac94927b5ced4e39e638c291cc4aeb409319ffe20f106322321371fbd76b0b52-merged.mount: Deactivated successfully.
Jan 22 04:43:18 np0005591760 podman[171299]: 2026-01-22 09:43:18.085324433 +0000 UTC m=+0.044256691 container remove d791c845824805d9759eea399ab1b77ce3a5ac18664d4cdeedcfbb4e670a4815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 04:43:18 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Main process exited, code=exited, status=139/n/a
Jan 22 04:43:18 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Failed with result 'exit-code'.
Jan 22 04:43:18 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Consumed 1.152s CPU time.
Jan 22 04:43:18 np0005591760 podman[171364]: 2026-01-22 09:43:18.227202609 +0000 UTC m=+0.029065294 container create 56b9fce6c643082a1e6a83735dd0692a495325bb423f938c66b83e57bd83b05b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:43:18 np0005591760 systemd[1]: Started libpod-conmon-56b9fce6c643082a1e6a83735dd0692a495325bb423f938c66b83e57bd83b05b.scope.
Jan 22 04:43:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:43:18 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:43:18 np0005591760 podman[171364]: 2026-01-22 09:43:18.288695497 +0000 UTC m=+0.090558193 container init 56b9fce6c643082a1e6a83735dd0692a495325bb423f938c66b83e57bd83b05b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_robinson, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 04:43:18 np0005591760 podman[171364]: 2026-01-22 09:43:18.292873613 +0000 UTC m=+0.094736299 container start 56b9fce6c643082a1e6a83735dd0692a495325bb423f938c66b83e57bd83b05b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 04:43:18 np0005591760 podman[171364]: 2026-01-22 09:43:18.293933486 +0000 UTC m=+0.095796163 container attach 56b9fce6c643082a1e6a83735dd0692a495325bb423f938c66b83e57bd83b05b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_robinson, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:43:18 np0005591760 hardcore_robinson[171380]: 167 167
Jan 22 04:43:18 np0005591760 systemd[1]: libpod-56b9fce6c643082a1e6a83735dd0692a495325bb423f938c66b83e57bd83b05b.scope: Deactivated successfully.
Jan 22 04:43:18 np0005591760 podman[171364]: 2026-01-22 09:43:18.297053014 +0000 UTC m=+0.098915699 container died 56b9fce6c643082a1e6a83735dd0692a495325bb423f938c66b83e57bd83b05b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 04:43:18 np0005591760 systemd[1]: var-lib-containers-storage-overlay-5630e07a9e7c8a2655bef45f045b3f850de6091763b203689e1c5107b924ab76-merged.mount: Deactivated successfully.
Jan 22 04:43:18 np0005591760 podman[171364]: 2026-01-22 09:43:18.215627819 +0000 UTC m=+0.017490526 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:43:18 np0005591760 podman[171364]: 2026-01-22 09:43:18.317042719 +0000 UTC m=+0.118905415 container remove 56b9fce6c643082a1e6a83735dd0692a495325bb423f938c66b83e57bd83b05b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:43:18 np0005591760 systemd[1]: libpod-conmon-56b9fce6c643082a1e6a83735dd0692a495325bb423f938c66b83e57bd83b05b.scope: Deactivated successfully.
Jan 22 04:43:18 np0005591760 podman[171452]: 2026-01-22 09:43:18.443877873 +0000 UTC m=+0.037139932 container create 7eec31f1e0ccb7aa4f2b690e395b86f24fc37f03249191d6557fc2ccdf3ee9d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_euler, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 04:43:18 np0005591760 systemd[1]: Started libpod-conmon-7eec31f1e0ccb7aa4f2b690e395b86f24fc37f03249191d6557fc2ccdf3ee9d7.scope.
Jan 22 04:43:18 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:43:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585c06569d6f76de2bb68e2a357c4f7be21b16df6be9c75699baf750de8ea24b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585c06569d6f76de2bb68e2a357c4f7be21b16df6be9c75699baf750de8ea24b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585c06569d6f76de2bb68e2a357c4f7be21b16df6be9c75699baf750de8ea24b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585c06569d6f76de2bb68e2a357c4f7be21b16df6be9c75699baf750de8ea24b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:18 np0005591760 podman[171452]: 2026-01-22 09:43:18.493396488 +0000 UTC m=+0.086658557 container init 7eec31f1e0ccb7aa4f2b690e395b86f24fc37f03249191d6557fc2ccdf3ee9d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:43:18 np0005591760 podman[171452]: 2026-01-22 09:43:18.498656251 +0000 UTC m=+0.091918309 container start 7eec31f1e0ccb7aa4f2b690e395b86f24fc37f03249191d6557fc2ccdf3ee9d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:43:18 np0005591760 podman[171452]: 2026-01-22 09:43:18.499893303 +0000 UTC m=+0.093155362 container attach 7eec31f1e0ccb7aa4f2b690e395b86f24fc37f03249191d6557fc2ccdf3ee9d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_euler, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:43:18 np0005591760 podman[171452]: 2026-01-22 09:43:18.430261756 +0000 UTC m=+0.023523835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:43:18 np0005591760 python3.9[171486]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:43:18 np0005591760 youthful_euler[171491]: {
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:    "0": [
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:        {
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            "devices": [
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "/dev/loop3"
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            ],
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            "lv_name": "ceph_lv0",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            "lv_size": "21470642176",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            "name": "ceph_lv0",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            "tags": {
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.cluster_name": "ceph",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.crush_device_class": "",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.encrypted": "0",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.osd_id": "0",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.type": "block",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.vdo": "0",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:                "ceph.with_tpm": "0"
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            },
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            "type": "block",
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:            "vg_name": "ceph_vg0"
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:        }
Jan 22 04:43:18 np0005591760 youthful_euler[171491]:    ]
Jan 22 04:43:18 np0005591760 youthful_euler[171491]: }
Jan 22 04:43:18 np0005591760 systemd[1]: libpod-7eec31f1e0ccb7aa4f2b690e395b86f24fc37f03249191d6557fc2ccdf3ee9d7.scope: Deactivated successfully.
Jan 22 04:43:18 np0005591760 podman[171452]: 2026-01-22 09:43:18.736235836 +0000 UTC m=+0.329497915 container died 7eec31f1e0ccb7aa4f2b690e395b86f24fc37f03249191d6557fc2ccdf3ee9d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_euler, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 04:43:18 np0005591760 systemd[1]: var-lib-containers-storage-overlay-585c06569d6f76de2bb68e2a357c4f7be21b16df6be9c75699baf750de8ea24b-merged.mount: Deactivated successfully.
Jan 22 04:43:18 np0005591760 podman[171452]: 2026-01-22 09:43:18.762823079 +0000 UTC m=+0.356085138 container remove 7eec31f1e0ccb7aa4f2b690e395b86f24fc37f03249191d6557fc2ccdf3ee9d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:43:18 np0005591760 systemd[1]: libpod-conmon-7eec31f1e0ccb7aa4f2b690e395b86f24fc37f03249191d6557fc2ccdf3ee9d7.scope: Deactivated successfully.
Jan 22 04:43:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:19.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:19 np0005591760 podman[171594]: 2026-01-22 09:43:19.192300853 +0000 UTC m=+0.026527699 container create aebbc0ea6c20a98eb1fee0038ff58296f759ceefba4a451e2920ffd836e9f64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_blackburn, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:43:19 np0005591760 systemd[1]: Started libpod-conmon-aebbc0ea6c20a98eb1fee0038ff58296f759ceefba4a451e2920ffd836e9f64a.scope.
Jan 22 04:43:19 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:43:19 np0005591760 podman[171594]: 2026-01-22 09:43:19.240562007 +0000 UTC m=+0.074788863 container init aebbc0ea6c20a98eb1fee0038ff58296f759ceefba4a451e2920ffd836e9f64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 04:43:19 np0005591760 podman[171594]: 2026-01-22 09:43:19.245650813 +0000 UTC m=+0.079877659 container start aebbc0ea6c20a98eb1fee0038ff58296f759ceefba4a451e2920ffd836e9f64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:43:19 np0005591760 podman[171594]: 2026-01-22 09:43:19.246678345 +0000 UTC m=+0.080905191 container attach aebbc0ea6c20a98eb1fee0038ff58296f759ceefba4a451e2920ffd836e9f64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:43:19 np0005591760 great_blackburn[171607]: 167 167
Jan 22 04:43:19 np0005591760 systemd[1]: libpod-aebbc0ea6c20a98eb1fee0038ff58296f759ceefba4a451e2920ffd836e9f64a.scope: Deactivated successfully.
Jan 22 04:43:19 np0005591760 conmon[171607]: conmon aebbc0ea6c20a98eb1fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aebbc0ea6c20a98eb1fee0038ff58296f759ceefba4a451e2920ffd836e9f64a.scope/container/memory.events
Jan 22 04:43:19 np0005591760 podman[171594]: 2026-01-22 09:43:19.249717568 +0000 UTC m=+0.083944414 container died aebbc0ea6c20a98eb1fee0038ff58296f759ceefba4a451e2920ffd836e9f64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:43:19 np0005591760 systemd[1]: var-lib-containers-storage-overlay-83ac643dea6fe59f487dab79c2be25cec9df4b9f12fce76b8b77044c278c326b-merged.mount: Deactivated successfully.
Jan 22 04:43:19 np0005591760 podman[171594]: 2026-01-22 09:43:19.182110156 +0000 UTC m=+0.016337022 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:43:19 np0005591760 podman[171594]: 2026-01-22 09:43:19.284770343 +0000 UTC m=+0.118997190 container remove aebbc0ea6c20a98eb1fee0038ff58296f759ceefba4a451e2920ffd836e9f64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_blackburn, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:43:19 np0005591760 systemd[1]: libpod-conmon-aebbc0ea6c20a98eb1fee0038ff58296f759ceefba4a451e2920ffd836e9f64a.scope: Deactivated successfully.
Jan 22 04:43:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:43:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:43:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:43:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:43:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:43:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:43:19 np0005591760 podman[171629]: 2026-01-22 09:43:19.400370145 +0000 UTC m=+0.027019859 container create e0bb0071b3ce77700a8f762b93356b5c90e45dca1bc380dab78c3234097e442c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:43:19 np0005591760 systemd[1]: Started libpod-conmon-e0bb0071b3ce77700a8f762b93356b5c90e45dca1bc380dab78c3234097e442c.scope.
Jan 22 04:43:19 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:43:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0af62fd3d374d964fb1b71b31917d295295db1a01c361a87e1d63f9218a7f36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0af62fd3d374d964fb1b71b31917d295295db1a01c361a87e1d63f9218a7f36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0af62fd3d374d964fb1b71b31917d295295db1a01c361a87e1d63f9218a7f36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0af62fd3d374d964fb1b71b31917d295295db1a01c361a87e1d63f9218a7f36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:19 np0005591760 podman[171629]: 2026-01-22 09:43:19.449533693 +0000 UTC m=+0.076183396 container init e0bb0071b3ce77700a8f762b93356b5c90e45dca1bc380dab78c3234097e442c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_moser, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:43:19 np0005591760 podman[171629]: 2026-01-22 09:43:19.454803936 +0000 UTC m=+0.081453639 container start e0bb0071b3ce77700a8f762b93356b5c90e45dca1bc380dab78c3234097e442c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_moser, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 04:43:19 np0005591760 podman[171629]: 2026-01-22 09:43:19.455760902 +0000 UTC m=+0.082410605 container attach e0bb0071b3ce77700a8f762b93356b5c90e45dca1bc380dab78c3234097e442c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:43:19 np0005591760 podman[171629]: 2026-01-22 09:43:19.389178958 +0000 UTC m=+0.015828681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:43:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:19.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:19 np0005591760 lvm[171718]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:43:19 np0005591760 lvm[171718]: VG ceph_vg0 finished
Jan 22 04:43:19 np0005591760 hopeful_moser[171642]: {}
Jan 22 04:43:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:43:19 np0005591760 podman[171629]: 2026-01-22 09:43:19.934671297 +0000 UTC m=+0.561321000 container died e0bb0071b3ce77700a8f762b93356b5c90e45dca1bc380dab78c3234097e442c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_moser, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:43:19 np0005591760 systemd[1]: libpod-e0bb0071b3ce77700a8f762b93356b5c90e45dca1bc380dab78c3234097e442c.scope: Deactivated successfully.
Jan 22 04:43:19 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b0af62fd3d374d964fb1b71b31917d295295db1a01c361a87e1d63f9218a7f36-merged.mount: Deactivated successfully.
Jan 22 04:43:19 np0005591760 podman[171629]: 2026-01-22 09:43:19.958597289 +0000 UTC m=+0.585246992 container remove e0bb0071b3ce77700a8f762b93356b5c90e45dca1bc380dab78c3234097e442c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 22 04:43:19 np0005591760 systemd[1]: libpod-conmon-e0bb0071b3ce77700a8f762b93356b5c90e45dca1bc380dab78c3234097e442c.scope: Deactivated successfully.
Jan 22 04:43:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:43:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:43:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:20 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:20 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:43:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 22 04:43:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:21.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 22 04:43:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:21.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:43:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094322 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:43:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:23.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:43:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:23.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:43:24 np0005591760 podman[171772]: 2026-01-22 09:43:24.07097243 +0000 UTC m=+0.062368108 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 04:43:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:25.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:25.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:43:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:26.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:26.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:26.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:26.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:27.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:43:27] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Jan 22 04:43:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:43:27] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Jan 22 04:43:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:27.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:43:28 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Scheduled restart job, restart counter is at 2.
Jan 22 04:43:28 np0005591760 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:43:28 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Consumed 1.152s CPU time.
Jan 22 04:43:28 np0005591760 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:43:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:43:28 np0005591760 podman[171912]: 2026-01-22 09:43:28.384845965 +0000 UTC m=+0.026473435 container create 46d5950a52b3690cb19824ace85936eab3706e2eeb389f6f32b1f8abcb394b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:43:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651b8bb6a59cba2c5bc7e4b8122a1158f6fe5c6cf639d0e3129f829aaf3e1e86/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651b8bb6a59cba2c5bc7e4b8122a1158f6fe5c6cf639d0e3129f829aaf3e1e86/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651b8bb6a59cba2c5bc7e4b8122a1158f6fe5c6cf639d0e3129f829aaf3e1e86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651b8bb6a59cba2c5bc7e4b8122a1158f6fe5c6cf639d0e3129f829aaf3e1e86/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ylzmiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:43:28 np0005591760 podman[171912]: 2026-01-22 09:43:28.420486288 +0000 UTC m=+0.062113769 container init 46d5950a52b3690cb19824ace85936eab3706e2eeb389f6f32b1f8abcb394b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:43:28 np0005591760 podman[171912]: 2026-01-22 09:43:28.42675353 +0000 UTC m=+0.068381000 container start 46d5950a52b3690cb19824ace85936eab3706e2eeb389f6f32b1f8abcb394b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:43:28 np0005591760 bash[171912]: 46d5950a52b3690cb19824ace85936eab3706e2eeb389f6f32b1f8abcb394b28
Jan 22 04:43:28 np0005591760 podman[171912]: 2026-01-22 09:43:28.373061998 +0000 UTC m=+0.014689488 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:43:28 np0005591760 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:43:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 22 04:43:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 22 04:43:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 22 04:43:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 22 04:43:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 22 04:43:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 22 04:43:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 22 04:43:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:43:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:43:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:29.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:43:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:43:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:29.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:43:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 04:43:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 8499 writes, 32K keys, 8499 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 8499 writes, 2181 syncs, 3.90 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 8499 writes, 32K keys, 8499 commit groups, 1.0 writes per commit group, ingest: 20.89 MB, 0.03 MB/s
Interval WAL: 8499 writes, 2181 syncs, 3.90 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5581da631350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5581da631350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 22 04:43:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:43:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:31.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:31.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:43:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:33.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:43:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:33.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:43:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:34 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:43:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:34 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:43:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:35.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:35.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:43:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:36.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:36.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:36.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:36.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:43:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:37.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:43:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:43:37] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Jan 22 04:43:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:43:37] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Jan 22 04:43:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:37.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:43:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:43:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:43:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:39.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:43:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:43:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:39.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:43:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 22 04:43:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1220000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:40 np0005591760 kernel: SELinux:  Converting 2782 SID table entries...
Jan 22 04:43:40 np0005591760 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 04:43:40 np0005591760 kernel: SELinux:  policy capability open_perms=1
Jan 22 04:43:40 np0005591760 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 04:43:40 np0005591760 kernel: SELinux:  policy capability always_check_network=0
Jan 22 04:43:40 np0005591760 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 04:43:40 np0005591760 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 04:43:40 np0005591760 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 04:43:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:41.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:41 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:41.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:43:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:41 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1220001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094342 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:43:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:42 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224004730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:43.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:43 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218001e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:43:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:43:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:43.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:43:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 22 04:43:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:43 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:44 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1220002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:43:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:45.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:43:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:45 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224004730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:45.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:43:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:45 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12180029d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:45 np0005591760 dbus-broker-launch[735]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 22 04:43:46 np0005591760 podman[172136]: 2026-01-22 09:43:46.07429496 +0000 UTC m=+0.041272438 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 04:43:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:46 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:46.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:46.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:46.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:46.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:47.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:47 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1220002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:43:47.298 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:43:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:43:47.298 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:43:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:43:47.299 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:43:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:43:47] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Jan 22 04:43:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:43:47] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Jan 22 04:43:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:47.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:47 np0005591760 kernel: SELinux:  Converting 2782 SID table entries...
Jan 22 04:43:47 np0005591760 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 04:43:47 np0005591760 kernel: SELinux:  policy capability open_perms=1
Jan 22 04:43:47 np0005591760 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 04:43:47 np0005591760 kernel: SELinux:  policy capability always_check_network=0
Jan 22 04:43:47 np0005591760 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 04:43:47 np0005591760 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 04:43:47 np0005591760 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 04:43:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:43:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:47 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224004730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:43:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:48 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12180029d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:43:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:49.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:43:49
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.data', 'volumes', 'vms', 'backups', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.meta']
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:43:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:49 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:43:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:49.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:43:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:49 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1220002270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:50 np0005591760 dbus-broker-launch[735]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 22 04:43:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:50 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:51.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:51 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12180029d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:51.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:43:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:51 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:52 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12200095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:53.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:53 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224005830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:43:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:53.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:43:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:53 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:54 np0005591760 podman[172193]: 2026-01-22 09:43:54.315261315 +0000 UTC m=+0.055261284 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 04:43:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:54 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:55.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:55 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12200095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:43:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:55.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:43:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:43:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:55 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224005830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:56 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:56.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:56.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:56.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:43:56.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:43:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:57.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:57 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12200095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:43:57] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:43:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:43:57] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Jan 22 04:43:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:57.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:43:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:57 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:43:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:58 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:43:59.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:59 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:43:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:43:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:43:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:43:59.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:43:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:43:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:43:59 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12200095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:00 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:01.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:01 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224005830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:44:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:01 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:02 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122000aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:03.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:03 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:44:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:03.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:44:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:03 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:04 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:05.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:05 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122000aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:05.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:44:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:05 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224005830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:06 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:06.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:06.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:06.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:06.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:07.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:07 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:44:07] "GET /metrics HTTP/1.1" 200 48392 "" "Prometheus/2.51.0"
Jan 22 04:44:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:44:07] "GET /metrics HTTP/1.1" 200 48392 "" "Prometheus/2.51.0"
Jan 22 04:44:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:07.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:44:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:07 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:44:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:08 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224005830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:09.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:09 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:09.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:44:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:09 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122000aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:10 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:44:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:11.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:44:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:11 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224006930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:11.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:44:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:11 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:12 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122000aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:44:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:13.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:44:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:44:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:13 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1228002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:13.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:44:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:13 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224006930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094414 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:44:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:14 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224006930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:15.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:15 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122000aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:15.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:44:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:15 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1228003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:16 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:16.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:16.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:16.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:16.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:17 np0005591760 podman[189134]: 2026-01-22 09:44:17.060365042 +0000 UTC m=+0.041024474 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 04:44:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:44:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:17.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:44:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:17 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224006930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:44:17] "GET /metrics HTTP/1.1" 200 48392 "" "Prometheus/2.51.0"
Jan 22 04:44:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:44:17] "GET /metrics HTTP/1.1" 200 48392 "" "Prometheus/2.51.0"
Jan 22 04:44:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:17.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:44:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:17 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122000aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:44:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:18 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1228003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:19.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:19 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:44:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:44:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:44:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:44:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:44:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:44:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:19.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:44:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:19 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224006930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:44:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:44:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:44:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:44:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:20 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122000aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:44:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:44:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:44:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:44:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:44:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:21.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:44:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:21 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1228003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:21 np0005591760 podman[189316]: 2026-01-22 09:44:21.427389683 +0000 UTC m=+0.027199794 container create e181272027cc83bfebef09a7d9b6f51e8a68fbcb9606b045c786d04bf523a3ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_moser, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 22 04:44:21 np0005591760 systemd[1]: Started libpod-conmon-e181272027cc83bfebef09a7d9b6f51e8a68fbcb9606b045c786d04bf523a3ce.scope.
Jan 22 04:44:21 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:44:21 np0005591760 podman[189316]: 2026-01-22 09:44:21.486656471 +0000 UTC m=+0.086466582 container init e181272027cc83bfebef09a7d9b6f51e8a68fbcb9606b045c786d04bf523a3ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_moser, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:44:21 np0005591760 podman[189316]: 2026-01-22 09:44:21.491688393 +0000 UTC m=+0.091498494 container start e181272027cc83bfebef09a7d9b6f51e8a68fbcb9606b045c786d04bf523a3ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:44:21 np0005591760 podman[189316]: 2026-01-22 09:44:21.494037174 +0000 UTC m=+0.093847275 container attach e181272027cc83bfebef09a7d9b6f51e8a68fbcb9606b045c786d04bf523a3ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 22 04:44:21 np0005591760 musing_moser[189329]: 167 167
Jan 22 04:44:21 np0005591760 systemd[1]: libpod-e181272027cc83bfebef09a7d9b6f51e8a68fbcb9606b045c786d04bf523a3ce.scope: Deactivated successfully.
Jan 22 04:44:21 np0005591760 podman[189316]: 2026-01-22 09:44:21.496732559 +0000 UTC m=+0.096542661 container died e181272027cc83bfebef09a7d9b6f51e8a68fbcb9606b045c786d04bf523a3ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_moser, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:44:21 np0005591760 systemd[1]: var-lib-containers-storage-overlay-d8ffeb970548b282d74031d7da1e7afd9e9caaf58fcfc0e1fab5ecb8f4734a78-merged.mount: Deactivated successfully.
Jan 22 04:44:21 np0005591760 podman[189316]: 2026-01-22 09:44:21.415430542 +0000 UTC m=+0.015240653 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:44:21 np0005591760 podman[189316]: 2026-01-22 09:44:21.520637016 +0000 UTC m=+0.120447117 container remove e181272027cc83bfebef09a7d9b6f51e8a68fbcb9606b045c786d04bf523a3ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_moser, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 22 04:44:21 np0005591760 systemd[1]: libpod-conmon-e181272027cc83bfebef09a7d9b6f51e8a68fbcb9606b045c786d04bf523a3ce.scope: Deactivated successfully.
Jan 22 04:44:21 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:44:21 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:44:21 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:44:21 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:44:21 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:44:21 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:44:21 np0005591760 podman[189353]: 2026-01-22 09:44:21.641581219 +0000 UTC m=+0.028643698 container create db076093b7cce94b19184c7618d1bb478d84fcc824d4efd58a99745f0d724d25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_haibt, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:44:21 np0005591760 systemd[1]: Started libpod-conmon-db076093b7cce94b19184c7618d1bb478d84fcc824d4efd58a99745f0d724d25.scope.
Jan 22 04:44:21 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:44:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9322dda1d210a4683af9bccb1656bb53683167a1b4215eec0415becf61303be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9322dda1d210a4683af9bccb1656bb53683167a1b4215eec0415becf61303be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9322dda1d210a4683af9bccb1656bb53683167a1b4215eec0415becf61303be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9322dda1d210a4683af9bccb1656bb53683167a1b4215eec0415becf61303be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9322dda1d210a4683af9bccb1656bb53683167a1b4215eec0415becf61303be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:21 np0005591760 podman[189353]: 2026-01-22 09:44:21.691990467 +0000 UTC m=+0.079052966 container init db076093b7cce94b19184c7618d1bb478d84fcc824d4efd58a99745f0d724d25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_haibt, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:44:21 np0005591760 podman[189353]: 2026-01-22 09:44:21.699091763 +0000 UTC m=+0.086154242 container start db076093b7cce94b19184c7618d1bb478d84fcc824d4efd58a99745f0d724d25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_haibt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 04:44:21 np0005591760 podman[189353]: 2026-01-22 09:44:21.701223193 +0000 UTC m=+0.088285672 container attach db076093b7cce94b19184c7618d1bb478d84fcc824d4efd58a99745f0d724d25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_haibt, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:44:21 np0005591760 podman[189353]: 2026-01-22 09:44:21.630410835 +0000 UTC m=+0.017473325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:44:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:21.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:21 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:44:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:44:21 np0005591760 eager_haibt[189366]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:44:21 np0005591760 eager_haibt[189366]: --> All data devices are unavailable
Jan 22 04:44:21 np0005591760 systemd[1]: libpod-db076093b7cce94b19184c7618d1bb478d84fcc824d4efd58a99745f0d724d25.scope: Deactivated successfully.
Jan 22 04:44:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:21 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:21 np0005591760 conmon[189366]: conmon db076093b7cce94b1918 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db076093b7cce94b19184c7618d1bb478d84fcc824d4efd58a99745f0d724d25.scope/container/memory.events
Jan 22 04:44:21 np0005591760 podman[189353]: 2026-01-22 09:44:21.972500512 +0000 UTC m=+0.359562991 container died db076093b7cce94b19184c7618d1bb478d84fcc824d4efd58a99745f0d724d25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 04:44:21 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c9322dda1d210a4683af9bccb1656bb53683167a1b4215eec0415becf61303be-merged.mount: Deactivated successfully.
Jan 22 04:44:21 np0005591760 podman[189353]: 2026-01-22 09:44:21.995389793 +0000 UTC m=+0.382452272 container remove db076093b7cce94b19184c7618d1bb478d84fcc824d4efd58a99745f0d724d25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:44:22 np0005591760 systemd[1]: libpod-conmon-db076093b7cce94b19184c7618d1bb478d84fcc824d4efd58a99745f0d724d25.scope: Deactivated successfully.
Jan 22 04:44:22 np0005591760 podman[189471]: 2026-01-22 09:44:22.39002622 +0000 UTC m=+0.025448631 container create 4112287b71c8bacd4a194d004c9e9bf48dd50fceede243a1ce9c5479b80c7dbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:44:22 np0005591760 systemd[1]: Started libpod-conmon-4112287b71c8bacd4a194d004c9e9bf48dd50fceede243a1ce9c5479b80c7dbf.scope.
Jan 22 04:44:22 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:44:22 np0005591760 podman[189471]: 2026-01-22 09:44:22.440094896 +0000 UTC m=+0.075517327 container init 4112287b71c8bacd4a194d004c9e9bf48dd50fceede243a1ce9c5479b80c7dbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:44:22 np0005591760 podman[189471]: 2026-01-22 09:44:22.444565249 +0000 UTC m=+0.079987659 container start 4112287b71c8bacd4a194d004c9e9bf48dd50fceede243a1ce9c5479b80c7dbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:44:22 np0005591760 podman[189471]: 2026-01-22 09:44:22.445604198 +0000 UTC m=+0.081026609 container attach 4112287b71c8bacd4a194d004c9e9bf48dd50fceede243a1ce9c5479b80c7dbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_dubinsky, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:44:22 np0005591760 frosty_dubinsky[189484]: 167 167
Jan 22 04:44:22 np0005591760 systemd[1]: libpod-4112287b71c8bacd4a194d004c9e9bf48dd50fceede243a1ce9c5479b80c7dbf.scope: Deactivated successfully.
Jan 22 04:44:22 np0005591760 podman[189471]: 2026-01-22 09:44:22.447500255 +0000 UTC m=+0.082922666 container died 4112287b71c8bacd4a194d004c9e9bf48dd50fceede243a1ce9c5479b80c7dbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_dubinsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:44:22 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e9f38ad76a915e933243326f2188417a4e3b2767b3d5ae0c85065c87bcf0f49b-merged.mount: Deactivated successfully.
Jan 22 04:44:22 np0005591760 podman[189471]: 2026-01-22 09:44:22.46584853 +0000 UTC m=+0.101270942 container remove 4112287b71c8bacd4a194d004c9e9bf48dd50fceede243a1ce9c5479b80c7dbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:44:22 np0005591760 podman[189471]: 2026-01-22 09:44:22.379486247 +0000 UTC m=+0.014908678 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:44:22 np0005591760 systemd[1]: libpod-conmon-4112287b71c8bacd4a194d004c9e9bf48dd50fceede243a1ce9c5479b80c7dbf.scope: Deactivated successfully.
Jan 22 04:44:22 np0005591760 podman[189506]: 2026-01-22 09:44:22.586272531 +0000 UTC m=+0.028462486 container create 6d97ad0d5207caa09fe9c0aa8db137c703cc189a694ad9c43d4a9d1dc970932c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Jan 22 04:44:22 np0005591760 systemd[1]: Started libpod-conmon-6d97ad0d5207caa09fe9c0aa8db137c703cc189a694ad9c43d4a9d1dc970932c.scope.
Jan 22 04:44:22 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:44:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f02d542566ac0198dfc7a9715247772809db53ea43f4f387ba48c553b02977b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f02d542566ac0198dfc7a9715247772809db53ea43f4f387ba48c553b02977b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f02d542566ac0198dfc7a9715247772809db53ea43f4f387ba48c553b02977b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f02d542566ac0198dfc7a9715247772809db53ea43f4f387ba48c553b02977b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:22 np0005591760 podman[189506]: 2026-01-22 09:44:22.639379308 +0000 UTC m=+0.081569273 container init 6d97ad0d5207caa09fe9c0aa8db137c703cc189a694ad9c43d4a9d1dc970932c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 22 04:44:22 np0005591760 podman[189506]: 2026-01-22 09:44:22.644952502 +0000 UTC m=+0.087142457 container start 6d97ad0d5207caa09fe9c0aa8db137c703cc189a694ad9c43d4a9d1dc970932c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:44:22 np0005591760 podman[189506]: 2026-01-22 09:44:22.647446728 +0000 UTC m=+0.089636683 container attach 6d97ad0d5207caa09fe9c0aa8db137c703cc189a694ad9c43d4a9d1dc970932c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wu, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:44:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:22 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:22 np0005591760 podman[189506]: 2026-01-22 09:44:22.574372933 +0000 UTC m=+0.016562908 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:44:22 np0005591760 romantic_wu[189519]: {
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:    "0": [
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:        {
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            "devices": [
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "/dev/loop3"
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            ],
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            "lv_name": "ceph_lv0",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            "lv_size": "21470642176",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            "name": "ceph_lv0",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            "tags": {
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.cluster_name": "ceph",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.crush_device_class": "",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.encrypted": "0",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.osd_id": "0",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.type": "block",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.vdo": "0",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:                "ceph.with_tpm": "0"
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            },
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            "type": "block",
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:            "vg_name": "ceph_vg0"
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:        }
Jan 22 04:44:22 np0005591760 romantic_wu[189519]:    ]
Jan 22 04:44:22 np0005591760 romantic_wu[189519]: }
Jan 22 04:44:22 np0005591760 systemd[1]: libpod-6d97ad0d5207caa09fe9c0aa8db137c703cc189a694ad9c43d4a9d1dc970932c.scope: Deactivated successfully.
Jan 22 04:44:22 np0005591760 podman[189506]: 2026-01-22 09:44:22.876339096 +0000 UTC m=+0.318529051 container died 6d97ad0d5207caa09fe9c0aa8db137c703cc189a694ad9c43d4a9d1dc970932c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:44:22 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4f02d542566ac0198dfc7a9715247772809db53ea43f4f387ba48c553b02977b-merged.mount: Deactivated successfully.
Jan 22 04:44:22 np0005591760 podman[189506]: 2026-01-22 09:44:22.902719312 +0000 UTC m=+0.344909268 container remove 6d97ad0d5207caa09fe9c0aa8db137c703cc189a694ad9c43d4a9d1dc970932c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:44:22 np0005591760 systemd[1]: libpod-conmon-6d97ad0d5207caa09fe9c0aa8db137c703cc189a694ad9c43d4a9d1dc970932c.scope: Deactivated successfully.
Jan 22 04:44:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:23.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:44:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:23 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:23 np0005591760 podman[189625]: 2026-01-22 09:44:23.315576223 +0000 UTC m=+0.017518490 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:44:23 np0005591760 podman[189625]: 2026-01-22 09:44:23.693215754 +0000 UTC m=+0.395158001 container create f496e7c36b773088065c1c4671d3ff131482fc1fd77eaae729236d47f15f8934 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_burnell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:44:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:23.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:44:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:23 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12280045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:24 np0005591760 systemd[1]: Started libpod-conmon-f496e7c36b773088065c1c4671d3ff131482fc1fd77eaae729236d47f15f8934.scope.
Jan 22 04:44:24 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:44:24 np0005591760 podman[189625]: 2026-01-22 09:44:24.142439822 +0000 UTC m=+0.844382079 container init f496e7c36b773088065c1c4671d3ff131482fc1fd77eaae729236d47f15f8934 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_burnell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:44:24 np0005591760 podman[189625]: 2026-01-22 09:44:24.14932919 +0000 UTC m=+0.851271436 container start f496e7c36b773088065c1c4671d3ff131482fc1fd77eaae729236d47f15f8934 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 04:44:24 np0005591760 podman[189625]: 2026-01-22 09:44:24.151604492 +0000 UTC m=+0.853546739 container attach f496e7c36b773088065c1c4671d3ff131482fc1fd77eaae729236d47f15f8934 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 22 04:44:24 np0005591760 hardcore_burnell[189646]: 167 167
Jan 22 04:44:24 np0005591760 systemd[1]: libpod-f496e7c36b773088065c1c4671d3ff131482fc1fd77eaae729236d47f15f8934.scope: Deactivated successfully.
Jan 22 04:44:24 np0005591760 podman[189625]: 2026-01-22 09:44:24.153507031 +0000 UTC m=+0.855449278 container died f496e7c36b773088065c1c4671d3ff131482fc1fd77eaae729236d47f15f8934 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_burnell, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:44:24 np0005591760 systemd[1]: var-lib-containers-storage-overlay-867c40bd4f20d2498b3b17461e05b031133a7f538aac74b33e544bfe0d2af6f8-merged.mount: Deactivated successfully.
Jan 22 04:44:24 np0005591760 podman[189625]: 2026-01-22 09:44:24.173431758 +0000 UTC m=+0.875374006 container remove f496e7c36b773088065c1c4671d3ff131482fc1fd77eaae729236d47f15f8934 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_burnell, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:44:24 np0005591760 systemd[1]: libpod-conmon-f496e7c36b773088065c1c4671d3ff131482fc1fd77eaae729236d47f15f8934.scope: Deactivated successfully.
Jan 22 04:44:24 np0005591760 kernel: SELinux:  Converting 2783 SID table entries...
Jan 22 04:44:24 np0005591760 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 04:44:24 np0005591760 kernel: SELinux:  policy capability open_perms=1
Jan 22 04:44:24 np0005591760 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 04:44:24 np0005591760 kernel: SELinux:  policy capability always_check_network=0
Jan 22 04:44:24 np0005591760 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 04:44:24 np0005591760 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 04:44:24 np0005591760 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 04:44:24 np0005591760 podman[189671]: 2026-01-22 09:44:24.297753694 +0000 UTC m=+0.030304771 container create 3ac3e558ab10d82bfd8525235092fc8f6f7ee516d8661aa81a79501762fe6df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lehmann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 04:44:24 np0005591760 dbus-broker-launch[735]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 22 04:44:24 np0005591760 systemd[1]: Started libpod-conmon-3ac3e558ab10d82bfd8525235092fc8f6f7ee516d8661aa81a79501762fe6df4.scope.
Jan 22 04:44:24 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:44:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914259f596e63095040d46b0791b20c5a9ce2530a3e0e982f6959b395b42af3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914259f596e63095040d46b0791b20c5a9ce2530a3e0e982f6959b395b42af3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914259f596e63095040d46b0791b20c5a9ce2530a3e0e982f6959b395b42af3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914259f596e63095040d46b0791b20c5a9ce2530a3e0e982f6959b395b42af3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:44:24 np0005591760 podman[189671]: 2026-01-22 09:44:24.362949437 +0000 UTC m=+0.095500504 container init 3ac3e558ab10d82bfd8525235092fc8f6f7ee516d8661aa81a79501762fe6df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:44:24 np0005591760 podman[189671]: 2026-01-22 09:44:24.370477589 +0000 UTC m=+0.103028656 container start 3ac3e558ab10d82bfd8525235092fc8f6f7ee516d8661aa81a79501762fe6df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lehmann, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 04:44:24 np0005591760 podman[189671]: 2026-01-22 09:44:24.371515166 +0000 UTC m=+0.104066233 container attach 3ac3e558ab10d82bfd8525235092fc8f6f7ee516d8661aa81a79501762fe6df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:44:24 np0005591760 podman[189671]: 2026-01-22 09:44:24.284078986 +0000 UTC m=+0.016630074 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:44:24 np0005591760 podman[189687]: 2026-01-22 09:44:24.415389366 +0000 UTC m=+0.074279870 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 22 04:44:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:24 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:24 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:44:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:24 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:44:24 np0005591760 great_lehmann[189686]: {}
Jan 22 04:44:24 np0005591760 lvm[189790]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:44:24 np0005591760 lvm[189790]: VG ceph_vg0 finished
Jan 22 04:44:24 np0005591760 systemd[1]: libpod-3ac3e558ab10d82bfd8525235092fc8f6f7ee516d8661aa81a79501762fe6df4.scope: Deactivated successfully.
Jan 22 04:44:24 np0005591760 podman[189671]: 2026-01-22 09:44:24.883718254 +0000 UTC m=+0.616269321 container died 3ac3e558ab10d82bfd8525235092fc8f6f7ee516d8661aa81a79501762fe6df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lehmann, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:44:24 np0005591760 systemd[1]: var-lib-containers-storage-overlay-914259f596e63095040d46b0791b20c5a9ce2530a3e0e982f6959b395b42af3a-merged.mount: Deactivated successfully.
Jan 22 04:44:24 np0005591760 podman[189671]: 2026-01-22 09:44:24.910989862 +0000 UTC m=+0.643540929 container remove 3ac3e558ab10d82bfd8525235092fc8f6f7ee516d8661aa81a79501762fe6df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 04:44:24 np0005591760 systemd[1]: libpod-conmon-3ac3e558ab10d82bfd8525235092fc8f6f7ee516d8661aa81a79501762fe6df4.scope: Deactivated successfully.
Jan 22 04:44:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:44:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:44:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:44:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:44:24 np0005591760 dbus-broker-launch[714]: Noticed file-system modification, trigger reload.
Jan 22 04:44:24 np0005591760 dbus-broker-launch[714]: Noticed file-system modification, trigger reload.
Jan 22 04:44:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:25.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:25 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224007910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:25 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:44:25 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:44:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:25.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:44:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:25 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224007910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:26 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12280045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:26.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:26.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:26.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:26.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:44:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:27.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:44:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:27 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:44:27] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Jan 22 04:44:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:44:27] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Jan 22 04:44:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:27.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:27 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:44:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:44:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:27 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:44:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:29.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:29 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12280045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:44:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:29.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:44:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:44:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:29 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:30 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:30 np0005591760 systemd[1]: Stopping OpenSSH server daemon...
Jan 22 04:44:30 np0005591760 systemd[1]: sshd.service: Deactivated successfully.
Jan 22 04:44:30 np0005591760 systemd[1]: Stopped OpenSSH server daemon.
Jan 22 04:44:30 np0005591760 systemd[1]: sshd.service: Consumed 1.517s CPU time, read 32.0K from disk, written 0B to disk.
Jan 22 04:44:30 np0005591760 systemd[1]: Stopped target sshd-keygen.target.
Jan 22 04:44:30 np0005591760 systemd[1]: Stopping sshd-keygen.target...
Jan 22 04:44:30 np0005591760 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 04:44:30 np0005591760 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 04:44:30 np0005591760 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 04:44:30 np0005591760 systemd[1]: Reached target sshd-keygen.target.
Jan 22 04:44:30 np0005591760 systemd[1]: Starting OpenSSH server daemon...
Jan 22 04:44:30 np0005591760 systemd[1]: Started OpenSSH server daemon.
Jan 22 04:44:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 04:44:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:31.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 04:44:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:31 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122000aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:44:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:31 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12280056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:32 np0005591760 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 04:44:32 np0005591760 systemd[1]: Starting man-db-cache-update.service...
Jan 22 04:44:32 np0005591760 systemd[1]: Reloading.
Jan 22 04:44:32 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:44:32 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:44:32 np0005591760 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 04:44:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:32 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:44:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:33.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:44:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:44:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:33 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:33.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 22 04:44:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:33 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122000aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094434 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:44:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:34 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12280056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:35.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:35 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:35.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:44:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:35 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:36 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122000aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:36.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:37.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:37.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:37.002Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:37.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:37 np0005591760 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 04:44:37 np0005591760 systemd[1]: Finished man-db-cache-update.service.
Jan 22 04:44:37 np0005591760 systemd[1]: man-db-cache-update.service: Consumed 6.502s CPU time.
Jan 22 04:44:37 np0005591760 systemd[1]: run-r8a1db1576ff44b1fa3be529038d4762d.service: Deactivated successfully.
Jan 22 04:44:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:37 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12280056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:44:37] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Jan 22 04:44:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:44:37] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Jan 22 04:44:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:37.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:44:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:37 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:44:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:38 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:39 np0005591760 python3.9[199591]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 04:44:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:44:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:39.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:44:39 np0005591760 systemd[1]: Reloading.
Jan 22 04:44:39 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:44:39 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:44:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:39 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122000aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:39.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:39 np0005591760 python3.9[199781]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 04:44:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:44:39 np0005591760 systemd[1]: Reloading.
Jan 22 04:44:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:39 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12280056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:40 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:44:40 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:44:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:40 np0005591760 python3.9[199971]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 04:44:40 np0005591760 systemd[1]: Reloading.
Jan 22 04:44:40 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:44:40 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:44:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:41.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:41 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:41 np0005591760 python3.9[200164]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 04:44:41 np0005591760 systemd[1]: Reloading.
Jan 22 04:44:41 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:44:41 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:44:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:41.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 85 B/s wr, 7 op/s
Jan 22 04:44:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:41 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122000aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:42 np0005591760 python3.9[200355]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:42 np0005591760 systemd[1]: Reloading.
Jan 22 04:44:42 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:44:42 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:44:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:42 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12280056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:44:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:43.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:44:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:44:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:43 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12280056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:43 np0005591760 python3.9[200546]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:43 np0005591760 systemd[1]: Reloading.
Jan 22 04:44:43 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:44:43 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:44:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:43.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 6 op/s
Jan 22 04:44:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:43 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:44 np0005591760 python3.9[200739]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:44 np0005591760 systemd[1]: Reloading.
Jan 22 04:44:44 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:44:44 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:44:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:44 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1230003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:45 np0005591760 python3.9[200930]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:45.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:45 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12140008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:45 np0005591760 python3.9[201085]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:45 np0005591760 systemd[1]: Reloading.
Jan 22 04:44:45 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:44:45 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:44:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:45.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 22 04:44:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:45 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:46 np0005591760 python3.9[201276]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 04:44:46 np0005591760 systemd[1]: Reloading.
Jan 22 04:44:46 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:44:46 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:44:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:46 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:46 np0005591760 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 22 04:44:46 np0005591760 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 22 04:44:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:46.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:46.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:46.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:46.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:47.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:47 np0005591760 podman[201470]: 2026-01-22 09:44:47.140854725 +0000 UTC m=+0.040547334 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 04:44:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:47 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1230004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:44:47.299 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:44:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:44:47.299 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:44:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:44:47.300 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:44:47 np0005591760 python3.9[201471]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:44:47] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Jan 22 04:44:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:44:47] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Jan 22 04:44:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:44:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:47.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:44:47 np0005591760 python3.9[201641]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 22 04:44:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:47 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12140008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:44:48 np0005591760 python3.9[201797]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:48 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:49 np0005591760 python3.9[201953]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:49.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:44:49
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['vms', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.nfs', '.mgr', 'default.rgw.meta', 'default.rgw.log']
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:44:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:49 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:44:49 np0005591760 python3.9[202108]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:49.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 22 04:44:49 np0005591760 auditd[674]: Audit daemon rotating log files
Jan 22 04:44:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:49 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1230004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:50 np0005591760 python3.9[202264]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:50 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214002230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:50 np0005591760 python3.9[202419]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:51.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:51 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214002230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:51 np0005591760 python3.9[202600]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:51.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:51 np0005591760 python3.9[202755]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 22 04:44:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:51 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214002230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:52 np0005591760 python3.9[202911]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:52 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1230004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:53 np0005591760 python3.9[203067]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:53.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:44:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:53 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:53 np0005591760 python3.9[203222]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:53.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 04:44:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:53 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:54 np0005591760 python3.9[203377]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:54 np0005591760 podman[203533]: 2026-01-22 09:44:54.52734507 +0000 UTC m=+0.057041050 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 04:44:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:54 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:54 np0005591760 python3.9[203534]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 04:44:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:55.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:55 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12300057d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:55 np0005591760 python3.9[203713]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:44:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:55.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:55 np0005591760 python3.9[203865]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:44:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 04:44:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:56 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:56 np0005591760 python3.9[204018]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:44:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:56 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:56 np0005591760 python3.9[204170]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:44:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:56.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:57.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:57.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:44:57.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:44:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:57.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:57 np0005591760 python3.9[204323]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:44:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:57 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:57 np0005591760 python3.9[204475]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:44:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:44:57] "GET /metrics HTTP/1.1" 200 48406 "" "Prometheus/2.51.0"
Jan 22 04:44:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:44:57] "GET /metrics HTTP/1.1" 200 48406 "" "Prometheus/2.51.0"
Jan 22 04:44:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:57.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:44:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:58 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12300057d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:58 np0005591760 python3.9[204626]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:44:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:44:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:58 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:58 np0005591760 python3.9[204779]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:44:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:44:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:44:59.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:44:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:44:59 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:44:59 np0005591760 python3.9[204904]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769075098.395253-1641-173373169911399/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:44:59 np0005591760 python3.9[205056]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:44:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:44:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:44:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:44:59.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:44:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:45:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:00 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:00 np0005591760 python3.9[205182]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769075099.4939725-1641-280160585389966/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:00 np0005591760 python3.9[205334]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:00 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:01 np0005591760 python3.9[205460]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769075100.3058202-1641-195560246742019/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:01.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:01 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:01 np0005591760 python3.9[205612]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:45:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:01.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:45:01 np0005591760 python3.9[205737]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769075101.1762052-1641-6293051933283/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:45:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:02 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12300057d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:02 np0005591760 python3.9[205890]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:02 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:02 np0005591760 python3.9[206015]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769075102.0059183-1641-194407823828833/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:45:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:03.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:45:03 np0005591760 python3.9[206168]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:45:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:03 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:03 np0005591760 python3.9[206293]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769075102.8597195-1641-78181992136673/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:03.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:45:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:04 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:04 np0005591760 python3.9[206446]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:04 np0005591760 python3.9[206569]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769075103.7513998-1641-46845650155594/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:04 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12300068d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:05 np0005591760 python3.9[206722]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:45:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:05.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:45:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:05 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:05 np0005591760 python3.9[206847]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769075104.679732-1641-90999375495503/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:05.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:05 np0005591760 python3.9[206999]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 22 04:45:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 0 B/s wr, 115 op/s
Jan 22 04:45:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:06 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:06 np0005591760 python3.9[207153]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:06 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:06 np0005591760 python3.9[207306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:06.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:07.002Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:07.002Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:07.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:45:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:07.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:45:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:07 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12300068d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:07 np0005591760 python3.9[207458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:45:07] "GET /metrics HTTP/1.1" 200 48408 "" "Prometheus/2.51.0"
Jan 22 04:45:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:45:07] "GET /metrics HTTP/1.1" 200 48408 "" "Prometheus/2.51.0"
Jan 22 04:45:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:07.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:07 np0005591760 python3.9[207610]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 0 B/s wr, 115 op/s
Jan 22 04:45:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:08 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094508 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:45:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:45:08 np0005591760 python3.9[207763]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:08 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:08 np0005591760 python3.9[207915]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:09.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:09 np0005591760 python3.9[208068]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:09 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:09 np0005591760 python3.9[208220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 04:45:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:09.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 04:45:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 0 B/s wr, 115 op/s
Jan 22 04:45:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:10 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12300068d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:10 np0005591760 python3.9[208373]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:10 np0005591760 python3.9[208525]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:10 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:11.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:11 np0005591760 python3.9[208703]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:11 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12300068d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:11 np0005591760 python3.9[208855]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:11.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Jan 22 04:45:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:12 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:12 np0005591760 python3.9[209008]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:12 np0005591760 python3.9[209160]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:12 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:13 np0005591760 python3.9[209313]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:13.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:45:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:13 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:13 np0005591760 python3.9[209436]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075112.7523859-2304-184093452550339/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:13.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Jan 22 04:45:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:14 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:14 np0005591760 python3.9[209589]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:14 np0005591760 python3.9[209712]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075113.7642312-2304-206495449659521/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:14 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12240074f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:14 np0005591760 python3.9[209867]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:15.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:15 np0005591760 python3.9[209990]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075114.5740619-2304-236074721071606/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:15 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:15 np0005591760 python3.9[210142]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:15 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:45:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:15.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 85 B/s wr, 119 op/s
Jan 22 04:45:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:16 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400bf270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:16 np0005591760 python3.9[210266]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075115.3943326-2304-62351650176182/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:16 np0005591760 python3.9[210418]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:16 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400bf270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:16 np0005591760 python3.9[210542]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075116.2267344-2304-55002379293578/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:16.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:17.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:17.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:17.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:45:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:17.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:45:17 np0005591760 podman[210666]: 2026-01-22 09:45:17.267089714 +0000 UTC m=+0.039923125 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 04:45:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:17 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224009030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:17 np0005591760 python3.9[210710]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:45:17] "GET /metrics HTTP/1.1" 200 48408 "" "Prometheus/2.51.0"
Jan 22 04:45:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:45:17] "GET /metrics HTTP/1.1" 200 48408 "" "Prometheus/2.51.0"
Jan 22 04:45:17 np0005591760 python3.9[210833]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075117.0567634-2304-256045033976945/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:45:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:17.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:45:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 85 B/s wr, 4 op/s
Jan 22 04:45:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:18 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:45:18 np0005591760 python3.9[210986]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:18 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400bf270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:18 np0005591760 python3.9[211109]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075117.929595-2304-49951567088005/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:18 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:45:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:18 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:45:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:18 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:45:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:45:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:19.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:45:19 np0005591760 python3.9[211262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:45:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:45:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:45:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:45:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:19 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400bf270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:45:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:45:19 np0005591760 python3.9[211385]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075118.808493-2304-119214955816325/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:19.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 85 B/s wr, 4 op/s
Jan 22 04:45:19 np0005591760 python3.9[211537]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:20 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224009030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:20 np0005591760 python3.9[211661]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075119.6549497-2304-252369367015090/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:20 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:20 np0005591760 python3.9[211814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:45:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:21.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:45:21 np0005591760 python3.9[211937]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075120.5004954-2304-175885963140581/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:21 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400bf270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:21 np0005591760 python3.9[212089]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:21 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:45:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:21.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 938 B/s wr, 6 op/s
Jan 22 04:45:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:22 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400bf270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:22 np0005591760 python3.9[212213]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075121.3638427-2304-239003285030769/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:22 np0005591760 python3.9[212365]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:22 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:22 np0005591760 python3.9[212489]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075122.208707-2304-245720771829907/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:45:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:23.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:45:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:45:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:23 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:23 np0005591760 python3.9[212641]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:23 np0005591760 python3.9[212764]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075123.010557-2304-213287090393605/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:45:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:23.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:45:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:45:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:24 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400bf270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:24 np0005591760 python3.9[212917]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:24 np0005591760 python3.9[213040]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075123.900493-2304-153711406207213/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:24 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400bf270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:25 np0005591760 podman[213066]: 2026-01-22 09:45:25.060658859 +0000 UTC m=+0.052973371 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 04:45:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:25.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:25 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:45:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:45:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:45:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:45:25 np0005591760 python3.9[213294]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:45:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:25.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:45:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:26 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:26 np0005591760 podman[213421]: 2026-01-22 09:45:26.098167615 +0000 UTC m=+0.027153048 container create 81ad41dc9de3d9ce953ba46d155914613a90da2d40db8cbcdfec18711b00dac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_elgamal, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:45:26 np0005591760 systemd[1]: Started libpod-conmon-81ad41dc9de3d9ce953ba46d155914613a90da2d40db8cbcdfec18711b00dac5.scope.
Jan 22 04:45:26 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:45:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:45:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:45:26 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:45:26 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:45:26 np0005591760 podman[213421]: 2026-01-22 09:45:26.151436602 +0000 UTC m=+0.080422045 container init 81ad41dc9de3d9ce953ba46d155914613a90da2d40db8cbcdfec18711b00dac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:45:26 np0005591760 podman[213421]: 2026-01-22 09:45:26.156280157 +0000 UTC m=+0.085265591 container start 81ad41dc9de3d9ce953ba46d155914613a90da2d40db8cbcdfec18711b00dac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:45:26 np0005591760 podman[213421]: 2026-01-22 09:45:26.157321563 +0000 UTC m=+0.086306996 container attach 81ad41dc9de3d9ce953ba46d155914613a90da2d40db8cbcdfec18711b00dac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 22 04:45:26 np0005591760 wonderful_elgamal[213468]: 167 167
Jan 22 04:45:26 np0005591760 systemd[1]: libpod-81ad41dc9de3d9ce953ba46d155914613a90da2d40db8cbcdfec18711b00dac5.scope: Deactivated successfully.
Jan 22 04:45:26 np0005591760 podman[213421]: 2026-01-22 09:45:26.160486041 +0000 UTC m=+0.089471474 container died 81ad41dc9de3d9ce953ba46d155914613a90da2d40db8cbcdfec18711b00dac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:45:26 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b3e3c48ce7c7e6e67216eba4369dfbf0d09b55ff207427b98ab45d20dfc21ccb-merged.mount: Deactivated successfully.
Jan 22 04:45:26 np0005591760 podman[213421]: 2026-01-22 09:45:26.17847361 +0000 UTC m=+0.107459044 container remove 81ad41dc9de3d9ce953ba46d155914613a90da2d40db8cbcdfec18711b00dac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:45:26 np0005591760 podman[213421]: 2026-01-22 09:45:26.087658072 +0000 UTC m=+0.016643525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:45:26 np0005591760 systemd[1]: libpod-conmon-81ad41dc9de3d9ce953ba46d155914613a90da2d40db8cbcdfec18711b00dac5.scope: Deactivated successfully.
Jan 22 04:45:26 np0005591760 podman[213490]: 2026-01-22 09:45:26.295834839 +0000 UTC m=+0.027830084 container create 6956e321bb24903a54c2d994e213fc52f813aa7fb808b545c44f673c117450b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_noether, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 22 04:45:26 np0005591760 systemd[1]: Started libpod-conmon-6956e321bb24903a54c2d994e213fc52f813aa7fb808b545c44f673c117450b3.scope.
Jan 22 04:45:26 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:45:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd2cb72177fbd49a5b2f304b2c8c38f5e857cc8480b26c6b1ad4ec9af00a9b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd2cb72177fbd49a5b2f304b2c8c38f5e857cc8480b26c6b1ad4ec9af00a9b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd2cb72177fbd49a5b2f304b2c8c38f5e857cc8480b26c6b1ad4ec9af00a9b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd2cb72177fbd49a5b2f304b2c8c38f5e857cc8480b26c6b1ad4ec9af00a9b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd2cb72177fbd49a5b2f304b2c8c38f5e857cc8480b26c6b1ad4ec9af00a9b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:26 np0005591760 podman[213490]: 2026-01-22 09:45:26.344709057 +0000 UTC m=+0.076704312 container init 6956e321bb24903a54c2d994e213fc52f813aa7fb808b545c44f673c117450b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_noether, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:45:26 np0005591760 podman[213490]: 2026-01-22 09:45:26.352916689 +0000 UTC m=+0.084911934 container start 6956e321bb24903a54c2d994e213fc52f813aa7fb808b545c44f673c117450b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:45:26 np0005591760 podman[213490]: 2026-01-22 09:45:26.354425013 +0000 UTC m=+0.086420259 container attach 6956e321bb24903a54c2d994e213fc52f813aa7fb808b545c44f673c117450b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 22 04:45:26 np0005591760 podman[213490]: 2026-01-22 09:45:26.283937048 +0000 UTC m=+0.015932313 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:45:26 np0005591760 angry_noether[213524]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:45:26 np0005591760 angry_noether[213524]: --> All data devices are unavailable
Jan 22 04:45:26 np0005591760 python3.9[213584]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 22 04:45:26 np0005591760 systemd[1]: libpod-6956e321bb24903a54c2d994e213fc52f813aa7fb808b545c44f673c117450b3.scope: Deactivated successfully.
Jan 22 04:45:26 np0005591760 podman[213490]: 2026-01-22 09:45:26.620375417 +0000 UTC m=+0.352370662 container died 6956e321bb24903a54c2d994e213fc52f813aa7fb808b545c44f673c117450b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:45:26 np0005591760 systemd[1]: var-lib-containers-storage-overlay-cdd2cb72177fbd49a5b2f304b2c8c38f5e857cc8480b26c6b1ad4ec9af00a9b8-merged.mount: Deactivated successfully.
Jan 22 04:45:26 np0005591760 podman[213490]: 2026-01-22 09:45:26.648492741 +0000 UTC m=+0.380487987 container remove 6956e321bb24903a54c2d994e213fc52f813aa7fb808b545c44f673c117450b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_noether, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 04:45:26 np0005591760 systemd[1]: libpod-conmon-6956e321bb24903a54c2d994e213fc52f813aa7fb808b545c44f673c117450b3.scope: Deactivated successfully.
Jan 22 04:45:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:26 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:26.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:27.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:27.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:27.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:27 np0005591760 podman[213688]: 2026-01-22 09:45:27.066012297 +0000 UTC m=+0.031811824 container create 1dd63140e15aa3a7c3965d5d21e05c68c495985115abf8239a72ae05c11a0740 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:45:27 np0005591760 systemd[1]: Started libpod-conmon-1dd63140e15aa3a7c3965d5d21e05c68c495985115abf8239a72ae05c11a0740.scope.
Jan 22 04:45:27 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:45:27 np0005591760 podman[213688]: 2026-01-22 09:45:27.111234296 +0000 UTC m=+0.077033824 container init 1dd63140e15aa3a7c3965d5d21e05c68c495985115abf8239a72ae05c11a0740 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_jang, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:45:27 np0005591760 podman[213688]: 2026-01-22 09:45:27.115731289 +0000 UTC m=+0.081530816 container start 1dd63140e15aa3a7c3965d5d21e05c68c495985115abf8239a72ae05c11a0740 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Jan 22 04:45:27 np0005591760 podman[213688]: 2026-01-22 09:45:27.118219092 +0000 UTC m=+0.084018618 container attach 1dd63140e15aa3a7c3965d5d21e05c68c495985115abf8239a72ae05c11a0740 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:45:27 np0005591760 romantic_jang[213701]: 167 167
Jan 22 04:45:27 np0005591760 systemd[1]: libpod-1dd63140e15aa3a7c3965d5d21e05c68c495985115abf8239a72ae05c11a0740.scope: Deactivated successfully.
Jan 22 04:45:27 np0005591760 podman[213688]: 2026-01-22 09:45:27.119286034 +0000 UTC m=+0.085085561 container died 1dd63140e15aa3a7c3965d5d21e05c68c495985115abf8239a72ae05c11a0740 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:45:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-546ea42c1c694b5eaf1a2b21de9f4a9cf7540f862a6b58d4b54376c23d8ccc67-merged.mount: Deactivated successfully.
Jan 22 04:45:27 np0005591760 dbus-broker-launch[735]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 22 04:45:27 np0005591760 podman[213688]: 2026-01-22 09:45:27.147671085 +0000 UTC m=+0.113470611 container remove 1dd63140e15aa3a7c3965d5d21e05c68c495985115abf8239a72ae05c11a0740 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 22 04:45:27 np0005591760 podman[213688]: 2026-01-22 09:45:27.051883458 +0000 UTC m=+0.017683005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:45:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:27.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:27 np0005591760 systemd[1]: libpod-conmon-1dd63140e15aa3a7c3965d5d21e05c68c495985115abf8239a72ae05c11a0740.scope: Deactivated successfully.
Jan 22 04:45:27 np0005591760 podman[213724]: 2026-01-22 09:45:27.278979382 +0000 UTC m=+0.039002569 container create 484a452d192af368336525ea08e7f480e8eee199ecba0a433f4ad3940be4e9b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:45:27 np0005591760 systemd[1]: Started libpod-conmon-484a452d192af368336525ea08e7f480e8eee199ecba0a433f4ad3940be4e9b5.scope.
Jan 22 04:45:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:27 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218004f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:27 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:45:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9cdbcc51a9137f9839c015d9d78745bd1f461839881093e4af8c9218de9503/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9cdbcc51a9137f9839c015d9d78745bd1f461839881093e4af8c9218de9503/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9cdbcc51a9137f9839c015d9d78745bd1f461839881093e4af8c9218de9503/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9cdbcc51a9137f9839c015d9d78745bd1f461839881093e4af8c9218de9503/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:27 np0005591760 podman[213724]: 2026-01-22 09:45:27.355944826 +0000 UTC m=+0.115968033 container init 484a452d192af368336525ea08e7f480e8eee199ecba0a433f4ad3940be4e9b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:45:27 np0005591760 podman[213724]: 2026-01-22 09:45:27.261537922 +0000 UTC m=+0.021561129 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:45:27 np0005591760 podman[213724]: 2026-01-22 09:45:27.3605011 +0000 UTC m=+0.120524287 container start 484a452d192af368336525ea08e7f480e8eee199ecba0a433f4ad3940be4e9b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:45:27 np0005591760 podman[213724]: 2026-01-22 09:45:27.363866818 +0000 UTC m=+0.123890015 container attach 484a452d192af368336525ea08e7f480e8eee199ecba0a433f4ad3940be4e9b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]: {
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:    "0": [
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:        {
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            "devices": [
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "/dev/loop3"
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            ],
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            "lv_name": "ceph_lv0",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            "lv_size": "21470642176",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            "name": "ceph_lv0",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            "tags": {
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.cluster_name": "ceph",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.crush_device_class": "",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.encrypted": "0",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.osd_id": "0",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.type": "block",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.vdo": "0",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:                "ceph.with_tpm": "0"
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            },
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            "type": "block",
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:            "vg_name": "ceph_vg0"
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:        }
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]:    ]
Jan 22 04:45:27 np0005591760 distracted_shannon[213739]: }
Jan 22 04:45:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:45:27] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Jan 22 04:45:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:45:27] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Jan 22 04:45:27 np0005591760 systemd[1]: libpod-484a452d192af368336525ea08e7f480e8eee199ecba0a433f4ad3940be4e9b5.scope: Deactivated successfully.
Jan 22 04:45:27 np0005591760 podman[213724]: 2026-01-22 09:45:27.600216827 +0000 UTC m=+0.360240015 container died 484a452d192af368336525ea08e7f480e8eee199ecba0a433f4ad3940be4e9b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 22 04:45:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-7e9cdbcc51a9137f9839c015d9d78745bd1f461839881093e4af8c9218de9503-merged.mount: Deactivated successfully.
Jan 22 04:45:27 np0005591760 podman[213724]: 2026-01-22 09:45:27.622124199 +0000 UTC m=+0.382147386 container remove 484a452d192af368336525ea08e7f480e8eee199ecba0a433f4ad3940be4e9b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 04:45:27 np0005591760 systemd[1]: libpod-conmon-484a452d192af368336525ea08e7f480e8eee199ecba0a433f4ad3940be4e9b5.scope: Deactivated successfully.
Jan 22 04:45:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:27.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:27 np0005591760 python3.9[213932]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:45:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:28 np0005591760 podman[214061]: 2026-01-22 09:45:28.055202873 +0000 UTC m=+0.030189354 container create c267f9692b9b87f6949d2e1ad4fea64f6af60f5498498ebc8713bb659235fc59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:45:28 np0005591760 systemd[1]: Started libpod-conmon-c267f9692b9b87f6949d2e1ad4fea64f6af60f5498498ebc8713bb659235fc59.scope.
Jan 22 04:45:28 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:45:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094528 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:45:28 np0005591760 podman[214061]: 2026-01-22 09:45:28.108458947 +0000 UTC m=+0.083445438 container init c267f9692b9b87f6949d2e1ad4fea64f6af60f5498498ebc8713bb659235fc59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bartik, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Jan 22 04:45:28 np0005591760 podman[214061]: 2026-01-22 09:45:28.11449912 +0000 UTC m=+0.089485601 container start c267f9692b9b87f6949d2e1ad4fea64f6af60f5498498ebc8713bb659235fc59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Jan 22 04:45:28 np0005591760 podman[214061]: 2026-01-22 09:45:28.115663355 +0000 UTC m=+0.090649836 container attach c267f9692b9b87f6949d2e1ad4fea64f6af60f5498498ebc8713bb659235fc59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 04:45:28 np0005591760 awesome_bartik[214103]: 167 167
Jan 22 04:45:28 np0005591760 systemd[1]: libpod-c267f9692b9b87f6949d2e1ad4fea64f6af60f5498498ebc8713bb659235fc59.scope: Deactivated successfully.
Jan 22 04:45:28 np0005591760 podman[214061]: 2026-01-22 09:45:28.117893823 +0000 UTC m=+0.092880303 container died c267f9692b9b87f6949d2e1ad4fea64f6af60f5498498ebc8713bb659235fc59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bartik, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:45:28 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c332001a94217b4d69bddd52994a1788175398d18b2eef4e031fdd4fefa079dc-merged.mount: Deactivated successfully.
Jan 22 04:45:28 np0005591760 podman[214061]: 2026-01-22 09:45:28.135355049 +0000 UTC m=+0.110341530 container remove c267f9692b9b87f6949d2e1ad4fea64f6af60f5498498ebc8713bb659235fc59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bartik, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 04:45:28 np0005591760 podman[214061]: 2026-01-22 09:45:28.042740548 +0000 UTC m=+0.017727039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:45:28 np0005591760 systemd[1]: libpod-conmon-c267f9692b9b87f6949d2e1ad4fea64f6af60f5498498ebc8713bb659235fc59.scope: Deactivated successfully.
Jan 22 04:45:28 np0005591760 podman[214177]: 2026-01-22 09:45:28.260932656 +0000 UTC m=+0.030217718 container create a03b9e25e38a943750466d290c176fa0c94d3b4d230dd2ce3fbd86ed0a5afa48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_nobel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:45:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:45:28 np0005591760 systemd[1]: Started libpod-conmon-a03b9e25e38a943750466d290c176fa0c94d3b4d230dd2ce3fbd86ed0a5afa48.scope.
Jan 22 04:45:28 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:45:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0743a76f83821fecefe182e9355e3d3084b00003a2d62db8f5762e6683f9a32f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0743a76f83821fecefe182e9355e3d3084b00003a2d62db8f5762e6683f9a32f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0743a76f83821fecefe182e9355e3d3084b00003a2d62db8f5762e6683f9a32f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0743a76f83821fecefe182e9355e3d3084b00003a2d62db8f5762e6683f9a32f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:45:28 np0005591760 podman[214177]: 2026-01-22 09:45:28.327005033 +0000 UTC m=+0.096290105 container init a03b9e25e38a943750466d290c176fa0c94d3b4d230dd2ce3fbd86ed0a5afa48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:45:28 np0005591760 podman[214177]: 2026-01-22 09:45:28.33258039 +0000 UTC m=+0.101865452 container start a03b9e25e38a943750466d290c176fa0c94d3b4d230dd2ce3fbd86ed0a5afa48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_nobel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:45:28 np0005591760 podman[214177]: 2026-01-22 09:45:28.333729137 +0000 UTC m=+0.103014198 container attach a03b9e25e38a943750466d290c176fa0c94d3b4d230dd2ce3fbd86ed0a5afa48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:45:28 np0005591760 podman[214177]: 2026-01-22 09:45:28.249213531 +0000 UTC m=+0.018498613 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:45:28 np0005591760 python3.9[214171]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:28 np0005591760 python3.9[214390]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:28 np0005591760 lvm[214418]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:45:28 np0005591760 lvm[214418]: VG ceph_vg0 finished
Jan 22 04:45:28 np0005591760 nice_nobel[214190]: {}
Jan 22 04:45:28 np0005591760 podman[214177]: 2026-01-22 09:45:28.87920157 +0000 UTC m=+0.648486632 container died a03b9e25e38a943750466d290c176fa0c94d3b4d230dd2ce3fbd86ed0a5afa48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_nobel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 04:45:28 np0005591760 systemd[1]: libpod-a03b9e25e38a943750466d290c176fa0c94d3b4d230dd2ce3fbd86ed0a5afa48.scope: Deactivated successfully.
Jan 22 04:45:28 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0743a76f83821fecefe182e9355e3d3084b00003a2d62db8f5762e6683f9a32f-merged.mount: Deactivated successfully.
Jan 22 04:45:28 np0005591760 podman[214177]: 2026-01-22 09:45:28.903229423 +0000 UTC m=+0.672514484 container remove a03b9e25e38a943750466d290c176fa0c94d3b4d230dd2ce3fbd86ed0a5afa48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:45:28 np0005591760 systemd[1]: libpod-conmon-a03b9e25e38a943750466d290c176fa0c94d3b4d230dd2ce3fbd86ed0a5afa48.scope: Deactivated successfully.
Jan 22 04:45:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:45:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:45:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:45:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:45:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:29.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:29 np0005591760 python3.9[214606]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:29 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:29.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:29 np0005591760 python3.9[214758]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:29 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:45:29 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:45:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:45:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:30 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214003740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:30 np0005591760 python3.9[214911]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:30 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400c1a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:30 np0005591760 python3.9[215063]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:31.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:31 np0005591760 python3.9[215241]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:31 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:31 np0005591760 python3.9[215393]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:31.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:45:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:32 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:32 np0005591760 python3.9[215545]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:32 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:32 np0005591760 python3.9[215698]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:45:32 np0005591760 systemd[1]: Reloading.
Jan 22 04:45:32 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:45:32 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:45:33 np0005591760 systemd[1]: Starting libvirt logging daemon socket...
Jan 22 04:45:33 np0005591760 systemd[1]: Listening on libvirt logging daemon socket.
Jan 22 04:45:33 np0005591760 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 22 04:45:33 np0005591760 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 22 04:45:33 np0005591760 systemd[1]: Starting libvirt logging daemon...
Jan 22 04:45:33 np0005591760 systemd[1]: Started libvirt logging daemon.
Jan 22 04:45:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:33.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:45:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:33 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400c1a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:33.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:33 np0005591760 python3.9[215892]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:45:33 np0005591760 systemd[1]: Reloading.
Jan 22 04:45:33 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:45:33 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:45:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:45:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:34 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218004f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:34 np0005591760 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 22 04:45:34 np0005591760 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 22 04:45:34 np0005591760 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 22 04:45:34 np0005591760 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 22 04:45:34 np0005591760 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 22 04:45:34 np0005591760 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 22 04:45:34 np0005591760 systemd[1]: Starting libvirt nodedev daemon...
Jan 22 04:45:34 np0005591760 systemd[1]: Started libvirt nodedev daemon.
Jan 22 04:45:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:34 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:34 np0005591760 python3.9[216109]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:45:34 np0005591760 systemd[1]: Reloading.
Jan 22 04:45:34 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:45:34 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:45:34 np0005591760 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 22 04:45:34 np0005591760 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 22 04:45:34 np0005591760 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 22 04:45:34 np0005591760 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 22 04:45:34 np0005591760 systemd[1]: Starting libvirt proxy daemon...
Jan 22 04:45:35 np0005591760 systemd[1]: Started libvirt proxy daemon.
Jan 22 04:45:35 np0005591760 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 22 04:45:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:35.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:35 np0005591760 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 22 04:45:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:35 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:35 np0005591760 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 22 04:45:35 np0005591760 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 22 04:45:35 np0005591760 python3.9[216322]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:45:35 np0005591760 systemd[1]: Reloading.
Jan 22 04:45:35 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:45:35 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:45:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:35.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:35 np0005591760 systemd[1]: Listening on libvirt locking daemon socket.
Jan 22 04:45:35 np0005591760 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 22 04:45:35 np0005591760 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 22 04:45:35 np0005591760 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 22 04:45:35 np0005591760 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 22 04:45:35 np0005591760 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 22 04:45:35 np0005591760 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 22 04:45:35 np0005591760 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 22 04:45:35 np0005591760 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 22 04:45:35 np0005591760 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 22 04:45:35 np0005591760 systemd[1]: Starting libvirt QEMU daemon...
Jan 22 04:45:35 np0005591760 systemd[1]: Started libvirt QEMU daemon.
Jan 22 04:45:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:45:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:36 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400c1a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094536 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:45:36 np0005591760 setroubleshoot[216170]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 98fbf5a6-66fd-4bce-ba92-88bb4a96025c
Jan 22 04:45:36 np0005591760 setroubleshoot[216170]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 22 04:45:36 np0005591760 setroubleshoot[216170]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 98fbf5a6-66fd-4bce-ba92-88bb4a96025c
Jan 22 04:45:36 np0005591760 setroubleshoot[216170]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 22 04:45:36 np0005591760 python3.9[216548]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:45:36 np0005591760 systemd[1]: Reloading.
Jan 22 04:45:36 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:45:36 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:45:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:36 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218004f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:36 np0005591760 systemd[1]: Starting libvirt secret daemon socket...
Jan 22 04:45:36 np0005591760 systemd[1]: Listening on libvirt secret daemon socket.
Jan 22 04:45:36 np0005591760 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 22 04:45:36 np0005591760 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 22 04:45:36 np0005591760 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 22 04:45:36 np0005591760 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 22 04:45:36 np0005591760 systemd[1]: Starting libvirt secret daemon...
Jan 22 04:45:36 np0005591760 systemd[1]: Started libvirt secret daemon.
Jan 22 04:45:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:36.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:37.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:37.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:37.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:37.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:37 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:45:37] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Jan 22 04:45:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:45:37] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Jan 22 04:45:37 np0005591760 python3.9[216761]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:37.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:45:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:38 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:38 np0005591760 python3.9[216914]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 04:45:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:45:38 np0005591760 python3.9[217068]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:45:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:38 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:39.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:39 np0005591760 python3.9[217223]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 04:45:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:39 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c004da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:39.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:39 np0005591760 python3.9[217373]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:45:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12480023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:40 np0005591760 python3.9[217495]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769075139.5511541-3378-23984375303365/.source.xml follow=False _original_basename=secret.xml.j2 checksum=ee8dac29edb10d989fc7d8a43619a77a19a44d77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12180051b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:40 np0005591760 python3.9[217647]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 43df7a30-cf5f-5209-adfd-bf44298b19f2#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:45:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:41.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:41 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1224008020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:41 np0005591760 python3.9[217810]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:41.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:45:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:42 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c004da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:42 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:42 np0005591760 python3.9[218275]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:43.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:45:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:43 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12180051d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:43 np0005591760 python3.9[218427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:43 np0005591760 python3.9[218550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769075143.1188457-3543-233325279792620/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:43.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:43 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:45:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:45:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:44 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122400d030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:44 np0005591760 python3.9[218703]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:44 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c005ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:44 np0005591760 python3.9[218856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:45.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:45 np0005591760 python3.9[218934]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:45 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:45 np0005591760 python3.9[219086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:45.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:45:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:46 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:46 np0005591760 python3.9[219165]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.1if_4yrr recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:46 np0005591760 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 22 04:45:46 np0005591760 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 22 04:45:46 np0005591760 python3.9[219317]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:46 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122400d030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:46 np0005591760 python3.9[219396]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:46 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:45:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:46 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:45:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:46.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:47.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:47.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:47.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:45:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:47.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:45:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:45:47.300 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:45:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:45:47.300 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:45:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:45:47.300 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:45:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:47 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c005ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:47 np0005591760 podman[219548]: 2026-01-22 09:45:47.34237432 +0000 UTC m=+0.043213642 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:45:47 np0005591760 python3.9[219549]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:45:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:45:47] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Jan 22 04:45:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:45:47] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Jan 22 04:45:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:47.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:45:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:48 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:48 np0005591760 python3[219717]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 04:45:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:45:48 np0005591760 python3.9[219870]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:48 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12180053b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:49 np0005591760 python3.9[219949]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:45:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:49.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:45:49
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['images', 'vms', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.rgw.root', '.nfs', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data']
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:45:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:49 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122400d450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:45:49 np0005591760 python3.9[220101]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:45:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:49.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:45:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:49 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:45:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:45:50 np0005591760 python3.9[220226]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075149.2076197-3810-250985490097364/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:50 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c005ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:50 np0005591760 python3.9[220379]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:50 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c005ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:50 np0005591760 python3.9[220458]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:51.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:51 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] Check health
Jan 22 04:45:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:51 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12180053d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:51 np0005591760 python3.9[220635]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:51 np0005591760 python3.9[220713]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:51.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:45:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:52 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:52 np0005591760 python3.9[220866]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:52 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f122400c020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:52 np0005591760 python3.9[220991]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769075151.8682199-3927-249383117935598/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:53 np0005591760 python3.9[221144]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:53.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:45:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:53 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c006fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:53 np0005591760 python3.9[221296]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:45:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:53.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:45:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:54 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c006fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:54 np0005591760 python3.9[221452]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:54 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c006fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:54 np0005591760 python3.9[221605]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:45:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:55.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:55 np0005591760 podman[221730]: 2026-01-22 09:45:55.234770546 +0000 UTC m=+0.055207796 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 04:45:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:55 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c006fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:55 np0005591760 python3.9[221774]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:45:55 np0005591760 python3.9[221935]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:45:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:55.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:45:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:56 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c006fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094556 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:45:56 np0005591760 python3.9[222091]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:56 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c006fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:56 np0005591760 python3.9[222244]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:56.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:57.005Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:57.005Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:45:57.005Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:45:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:57.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:57 np0005591760 python3.9[222367]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769075156.5181043-4143-2260209200039/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:57 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12180053f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:45:57] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Jan 22 04:45:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:45:57] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Jan 22 04:45:57 np0005591760 python3.9[222519]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:57.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 22 04:45:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:58 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12180053f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:58 np0005591760 python3.9[222643]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769075157.4037554-4188-214069858048695/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:45:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:58 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12180053f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:58 np0005591760 python3.9[222795]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:45:59 np0005591760 python3.9[222919]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769075158.351732-4233-188508146693350/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:45:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:45:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:45:59.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:45:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:45:59 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400c1a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:45:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:45:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:45:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:45:59.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:45:59 np0005591760 python3.9[223071]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:45:59 np0005591760 systemd[1]: Reloading.
Jan 22 04:45:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 22 04:45:59 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:45:59 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:46:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:00 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:00 np0005591760 systemd[1]: Reached target edpm_libvirt.target.
Jan 22 04:46:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:00 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c006fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:00 np0005591760 python3.9[223263]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 04:46:00 np0005591760 systemd[1]: Reloading.
Jan 22 04:46:00 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:46:00 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:46:01 np0005591760 systemd[1]: Reloading.
Jan 22 04:46:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:46:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:01.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:46:01 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:46:01 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:46:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:01 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:01 np0005591760 systemd[1]: session-52.scope: Deactivated successfully.
Jan 22 04:46:01 np0005591760 systemd[1]: session-52.scope: Consumed 2min 25.849s CPU time.
Jan 22 04:46:01 np0005591760 systemd-logind[747]: Session 52 logged out. Waiting for processes to exit.
Jan 22 04:46:01 np0005591760 systemd-logind[747]: Removed session 52.
Jan 22 04:46:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:01.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 22 04:46:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:02 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400c1a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094602 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:46:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:02 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:46:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:03.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:46:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:46:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:03 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c006fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:03.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:46:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:04 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c008040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:04 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400c1a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:05.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:05 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:46:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:05.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:46:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:46:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:06 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:06 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c008040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:06.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:07.005Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:07.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:07.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:07 np0005591760 systemd-logind[747]: New session 53 of user zuul.
Jan 22 04:46:07 np0005591760 systemd[1]: Started Session 53 of User zuul.
Jan 22 04:46:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:07.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:07 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400c1a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:46:07] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Jan 22 04:46:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:46:07] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Jan 22 04:46:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:46:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:07.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:46:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:46:08 np0005591760 python3.9[223520]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:46:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:08 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400c1a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:46:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:08 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12400c1a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:09 np0005591760 python3.9[223676]: ansible-ansible.builtin.service_facts Invoked
Jan 22 04:46:09 np0005591760 network[223693]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 04:46:09 np0005591760 network[223694]: 'network-scripts' will be removed from distribution in near future.
Jan 22 04:46:09 np0005591760 network[223695]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 04:46:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:09.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:09 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:09.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:46:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:10 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:46:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:10 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:10 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:11.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:11 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:11.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:46:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:12 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:12 np0005591760 python3.9[223996]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 04:46:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:12 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:13 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:46:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:13 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:46:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:13.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:46:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:13 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:13 np0005591760 python3.9[224081]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:46:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:46:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:13.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:46:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:46:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:14 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:14 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:15.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:15 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:15.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Jan 22 04:46:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:16 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:46:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:16 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:16 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:16.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:17.007Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:17.007Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:17.007Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:17.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:17 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:46:17] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Jan 22 04:46:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:46:17] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Jan 22 04:46:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094617 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:46:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:17.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Jan 22 04:46:18 np0005591760 podman[224164]: 2026-01-22 09:46:18.053408974 +0000 UTC m=+0.043926225 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 04:46:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:18 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:46:18 np0005591760 python3.9[224255]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:46:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:18 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:18 np0005591760 python3.9[224408]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:46:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:19.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:46:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:46:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:46:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:46:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:19 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:46:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:46:19 np0005591760 python3.9[224561]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:46:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:19.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Jan 22 04:46:20 np0005591760 python3.9[224713]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:46:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:20 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:20 np0005591760 python3.9[224867]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:46:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:20 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:20 np0005591760 python3.9[224991]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769075180.1692455-240-49874240745660/.source.iscsi _original_basename=.o2it8dek follow=False checksum=de8b51c1f1da0d0b9ba384e9df40149f76554d35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:21.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:21 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:21 np0005591760 python3.9[225143]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:21.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:46:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:22 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094622 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:46:22 np0005591760 python3.9[225296]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:22 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:23 np0005591760 python3.9[225449]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:46:23 np0005591760 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 22 04:46:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:23.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:46:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:23 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:23 np0005591760 python3.9[225605]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:46:23 np0005591760 systemd[1]: Reloading.
Jan 22 04:46:23 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:46:23 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:46:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:23.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 341 B/s wr, 1 op/s
Jan 22 04:46:24 np0005591760 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 22 04:46:24 np0005591760 systemd[1]: Starting Open-iSCSI...
Jan 22 04:46:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:24 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:24 np0005591760 kernel: Loading iSCSI transport class v2.0-870.
Jan 22 04:46:24 np0005591760 systemd[1]: Started Open-iSCSI.
Jan 22 04:46:24 np0005591760 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 22 04:46:24 np0005591760 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 22 04:46:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:24 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:24 np0005591760 python3.9[225805]: ansible-ansible.builtin.service_facts Invoked
Jan 22 04:46:24 np0005591760 network[225822]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 04:46:24 np0005591760 network[225823]: 'network-scripts' will be removed from distribution in near future.
Jan 22 04:46:24 np0005591760 network[225824]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 04:46:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:25.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:25 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:25 np0005591760 podman[225832]: 2026-01-22 09:46:25.627432845 +0000 UTC m=+0.094033218 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 04:46:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:25.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 22 04:46:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:26 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:26 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:46:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:26 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:26.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:27.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:27.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:27.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:27.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:27 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:27 np0005591760 python3.9[226122]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:46:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:46:27] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Jan 22 04:46:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:46:27] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Jan 22 04:46:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:27.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Jan 22 04:46:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:46:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:28 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:29.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:29 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:29 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:46:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:29 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:46:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 22 04:46:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:46:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 22 04:46:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:46:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 22 04:46:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:46:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:46:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:46:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:46:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:46:29 np0005591760 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 04:46:29 np0005591760 systemd[1]: Starting man-db-cache-update.service...
Jan 22 04:46:29 np0005591760 systemd[1]: Reloading.
Jan 22 04:46:29 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:46:29 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:46:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:29.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Jan 22 04:46:30 np0005591760 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 04:46:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:30 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:30 np0005591760 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 04:46:30 np0005591760 systemd[1]: Finished man-db-cache-update.service.
Jan 22 04:46:30 np0005591760 systemd[1]: run-r3e0385871602410e8aa4ed68ee56f208.service: Deactivated successfully.
Jan 22 04:46:30 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:46:30 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:46:30 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:46:30 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:46:30 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:46:30 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:46:30 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:46:30 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:46:30 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:46:30 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:46:30 np0005591760 podman[226451]: 2026-01-22 09:46:30.414262413 +0000 UTC m=+0.030008169 container create 9aaff4632e04fd5d0904d8388b4e53ff4e77f5c6908766a5e8cb659a85072188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_banzai, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 22 04:46:30 np0005591760 systemd[1]: Started libpod-conmon-9aaff4632e04fd5d0904d8388b4e53ff4e77f5c6908766a5e8cb659a85072188.scope.
Jan 22 04:46:30 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:46:30 np0005591760 podman[226451]: 2026-01-22 09:46:30.478862066 +0000 UTC m=+0.094607841 container init 9aaff4632e04fd5d0904d8388b4e53ff4e77f5c6908766a5e8cb659a85072188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:46:30 np0005591760 podman[226451]: 2026-01-22 09:46:30.485474764 +0000 UTC m=+0.101220518 container start 9aaff4632e04fd5d0904d8388b4e53ff4e77f5c6908766a5e8cb659a85072188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:46:30 np0005591760 podman[226451]: 2026-01-22 09:46:30.486772361 +0000 UTC m=+0.102518116 container attach 9aaff4632e04fd5d0904d8388b4e53ff4e77f5c6908766a5e8cb659a85072188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:46:30 np0005591760 trusting_banzai[226464]: 167 167
Jan 22 04:46:30 np0005591760 systemd[1]: libpod-9aaff4632e04fd5d0904d8388b4e53ff4e77f5c6908766a5e8cb659a85072188.scope: Deactivated successfully.
Jan 22 04:46:30 np0005591760 podman[226451]: 2026-01-22 09:46:30.489302195 +0000 UTC m=+0.105047950 container died 9aaff4632e04fd5d0904d8388b4e53ff4e77f5c6908766a5e8cb659a85072188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:46:30 np0005591760 podman[226451]: 2026-01-22 09:46:30.400652354 +0000 UTC m=+0.016398130 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:46:30 np0005591760 systemd[1]: var-lib-containers-storage-overlay-2aa1cee9e621d67c8b6e8a247248566a357012cfa6f262a82696c57708a0ab85-merged.mount: Deactivated successfully.
Jan 22 04:46:30 np0005591760 podman[226451]: 2026-01-22 09:46:30.511595316 +0000 UTC m=+0.127341071 container remove 9aaff4632e04fd5d0904d8388b4e53ff4e77f5c6908766a5e8cb659a85072188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:46:30 np0005591760 systemd[1]: libpod-conmon-9aaff4632e04fd5d0904d8388b4e53ff4e77f5c6908766a5e8cb659a85072188.scope: Deactivated successfully.
Jan 22 04:46:30 np0005591760 podman[226510]: 2026-01-22 09:46:30.638719336 +0000 UTC m=+0.032326323 container create 421be8bde1765b49a0257698f5cea99012e3f87c949d14670e847ea8980ce22c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ganguly, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 22 04:46:30 np0005591760 systemd[1]: Started libpod-conmon-421be8bde1765b49a0257698f5cea99012e3f87c949d14670e847ea8980ce22c.scope.
Jan 22 04:46:30 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:46:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e997a04d659dd4ba7292223ce823beac6556088e253e186e132604c741cf3718/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e997a04d659dd4ba7292223ce823beac6556088e253e186e132604c741cf3718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e997a04d659dd4ba7292223ce823beac6556088e253e186e132604c741cf3718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e997a04d659dd4ba7292223ce823beac6556088e253e186e132604c741cf3718/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e997a04d659dd4ba7292223ce823beac6556088e253e186e132604c741cf3718/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:30 np0005591760 podman[226510]: 2026-01-22 09:46:30.698775176 +0000 UTC m=+0.092382163 container init 421be8bde1765b49a0257698f5cea99012e3f87c949d14670e847ea8980ce22c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:46:30 np0005591760 podman[226510]: 2026-01-22 09:46:30.705099999 +0000 UTC m=+0.098706976 container start 421be8bde1765b49a0257698f5cea99012e3f87c949d14670e847ea8980ce22c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ganguly, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:46:30 np0005591760 podman[226510]: 2026-01-22 09:46:30.706267241 +0000 UTC m=+0.099874218 container attach 421be8bde1765b49a0257698f5cea99012e3f87c949d14670e847ea8980ce22c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ganguly, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:46:30 np0005591760 podman[226510]: 2026-01-22 09:46:30.624667634 +0000 UTC m=+0.018274631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:46:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:30 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:30 np0005591760 thirsty_ganguly[226547]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:46:30 np0005591760 thirsty_ganguly[226547]: --> All data devices are unavailable
Jan 22 04:46:30 np0005591760 podman[226510]: 2026-01-22 09:46:30.97161829 +0000 UTC m=+0.365225277 container died 421be8bde1765b49a0257698f5cea99012e3f87c949d14670e847ea8980ce22c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 22 04:46:30 np0005591760 systemd[1]: libpod-421be8bde1765b49a0257698f5cea99012e3f87c949d14670e847ea8980ce22c.scope: Deactivated successfully.
Jan 22 04:46:30 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e997a04d659dd4ba7292223ce823beac6556088e253e186e132604c741cf3718-merged.mount: Deactivated successfully.
Jan 22 04:46:30 np0005591760 podman[226510]: 2026-01-22 09:46:30.998520949 +0000 UTC m=+0.392127926 container remove 421be8bde1765b49a0257698f5cea99012e3f87c949d14670e847ea8980ce22c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ganguly, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 04:46:31 np0005591760 systemd[1]: libpod-conmon-421be8bde1765b49a0257698f5cea99012e3f87c949d14670e847ea8980ce22c.scope: Deactivated successfully.
Jan 22 04:46:31 np0005591760 python3.9[226665]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 22 04:46:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:31.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:31 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:31 np0005591760 podman[226887]: 2026-01-22 09:46:31.437606609 +0000 UTC m=+0.031445290 container create 5134ea05e4697338dd10d2db32a62418c50eac246a67459badbbb864136a47dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:46:31 np0005591760 systemd[1]: Started libpod-conmon-5134ea05e4697338dd10d2db32a62418c50eac246a67459badbbb864136a47dd.scope.
Jan 22 04:46:31 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:46:31 np0005591760 podman[226887]: 2026-01-22 09:46:31.48676593 +0000 UTC m=+0.080604601 container init 5134ea05e4697338dd10d2db32a62418c50eac246a67459badbbb864136a47dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 04:46:31 np0005591760 podman[226887]: 2026-01-22 09:46:31.492959947 +0000 UTC m=+0.086798617 container start 5134ea05e4697338dd10d2db32a62418c50eac246a67459badbbb864136a47dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:46:31 np0005591760 podman[226887]: 2026-01-22 09:46:31.494548674 +0000 UTC m=+0.088387346 container attach 5134ea05e4697338dd10d2db32a62418c50eac246a67459badbbb864136a47dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 22 04:46:31 np0005591760 magical_kalam[226944]: 167 167
Jan 22 04:46:31 np0005591760 systemd[1]: libpod-5134ea05e4697338dd10d2db32a62418c50eac246a67459badbbb864136a47dd.scope: Deactivated successfully.
Jan 22 04:46:31 np0005591760 podman[226887]: 2026-01-22 09:46:31.496976034 +0000 UTC m=+0.090814725 container died 5134ea05e4697338dd10d2db32a62418c50eac246a67459badbbb864136a47dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kalam, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:46:31 np0005591760 systemd[1]: var-lib-containers-storage-overlay-63b36ef0544034b6ad166ebc825a399633b4bad930c4acfa9de9f42fd7b326b3-merged.mount: Deactivated successfully.
Jan 22 04:46:31 np0005591760 podman[226887]: 2026-01-22 09:46:31.5201781 +0000 UTC m=+0.114016771 container remove 5134ea05e4697338dd10d2db32a62418c50eac246a67459badbbb864136a47dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kalam, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:46:31 np0005591760 podman[226887]: 2026-01-22 09:46:31.423051769 +0000 UTC m=+0.016890460 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:46:31 np0005591760 systemd[1]: libpod-conmon-5134ea05e4697338dd10d2db32a62418c50eac246a67459badbbb864136a47dd.scope: Deactivated successfully.
Jan 22 04:46:31 np0005591760 podman[226969]: 2026-01-22 09:46:31.642637039 +0000 UTC m=+0.029184825 container create ca5e5d50758c8011231d81c5683f54c98cc94496bb3467e8a3d431bcb59fe01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_ritchie, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:46:31 np0005591760 python3.9[226948]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 22 04:46:31 np0005591760 systemd[1]: Started libpod-conmon-ca5e5d50758c8011231d81c5683f54c98cc94496bb3467e8a3d431bcb59fe01d.scope.
Jan 22 04:46:31 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:46:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb5e0135d77cacdd794837ce153df5b7ef19fec1064af43f338737d32d7d893/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb5e0135d77cacdd794837ce153df5b7ef19fec1064af43f338737d32d7d893/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb5e0135d77cacdd794837ce153df5b7ef19fec1064af43f338737d32d7d893/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb5e0135d77cacdd794837ce153df5b7ef19fec1064af43f338737d32d7d893/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:31 np0005591760 podman[226969]: 2026-01-22 09:46:31.696742782 +0000 UTC m=+0.083290578 container init ca5e5d50758c8011231d81c5683f54c98cc94496bb3467e8a3d431bcb59fe01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_ritchie, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Jan 22 04:46:31 np0005591760 podman[226969]: 2026-01-22 09:46:31.701814361 +0000 UTC m=+0.088362147 container start ca5e5d50758c8011231d81c5683f54c98cc94496bb3467e8a3d431bcb59fe01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 04:46:31 np0005591760 podman[226969]: 2026-01-22 09:46:31.704375273 +0000 UTC m=+0.090923079 container attach ca5e5d50758c8011231d81c5683f54c98cc94496bb3467e8a3d431bcb59fe01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:46:31 np0005591760 podman[226969]: 2026-01-22 09:46:31.630715365 +0000 UTC m=+0.017263172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:46:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:31.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]: {
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:    "0": [
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:        {
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            "devices": [
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "/dev/loop3"
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            ],
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            "lv_name": "ceph_lv0",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            "lv_size": "21470642176",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            "name": "ceph_lv0",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            "tags": {
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.cluster_name": "ceph",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.crush_device_class": "",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.encrypted": "0",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.osd_id": "0",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.type": "block",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.vdo": "0",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:                "ceph.with_tpm": "0"
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            },
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            "type": "block",
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:            "vg_name": "ceph_vg0"
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:        }
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]:    ]
Jan 22 04:46:31 np0005591760 naughty_ritchie[226986]: }
Jan 22 04:46:31 np0005591760 podman[226969]: 2026-01-22 09:46:31.939146716 +0000 UTC m=+0.325694502 container died ca5e5d50758c8011231d81c5683f54c98cc94496bb3467e8a3d431bcb59fe01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_ritchie, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:46:31 np0005591760 systemd[1]: libpod-ca5e5d50758c8011231d81c5683f54c98cc94496bb3467e8a3d431bcb59fe01d.scope: Deactivated successfully.
Jan 22 04:46:31 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6bb5e0135d77cacdd794837ce153df5b7ef19fec1064af43f338737d32d7d893-merged.mount: Deactivated successfully.
Jan 22 04:46:31 np0005591760 podman[226969]: 2026-01-22 09:46:31.967909994 +0000 UTC m=+0.354457771 container remove ca5e5d50758c8011231d81c5683f54c98cc94496bb3467e8a3d431bcb59fe01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:46:31 np0005591760 systemd[1]: libpod-conmon-ca5e5d50758c8011231d81c5683f54c98cc94496bb3467e8a3d431bcb59fe01d.scope: Deactivated successfully.
Jan 22 04:46:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:46:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:32 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:32 np0005591760 python3.9[227157]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:46:32 np0005591760 podman[227328]: 2026-01-22 09:46:32.385616187 +0000 UTC m=+0.026785418 container create d6f79444538af8af30b496b531775944bc9c4d331e9b147c96bf27f94ce6518a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banach, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 04:46:32 np0005591760 systemd[1]: Started libpod-conmon-d6f79444538af8af30b496b531775944bc9c4d331e9b147c96bf27f94ce6518a.scope.
Jan 22 04:46:32 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:46:32 np0005591760 podman[227328]: 2026-01-22 09:46:32.442771905 +0000 UTC m=+0.083941157 container init d6f79444538af8af30b496b531775944bc9c4d331e9b147c96bf27f94ce6518a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banach, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:46:32 np0005591760 podman[227328]: 2026-01-22 09:46:32.447678594 +0000 UTC m=+0.088847824 container start d6f79444538af8af30b496b531775944bc9c4d331e9b147c96bf27f94ce6518a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banach, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:46:32 np0005591760 podman[227328]: 2026-01-22 09:46:32.448873378 +0000 UTC m=+0.090042608 container attach d6f79444538af8af30b496b531775944bc9c4d331e9b147c96bf27f94ce6518a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:46:32 np0005591760 interesting_banach[227374]: 167 167
Jan 22 04:46:32 np0005591760 systemd[1]: libpod-d6f79444538af8af30b496b531775944bc9c4d331e9b147c96bf27f94ce6518a.scope: Deactivated successfully.
Jan 22 04:46:32 np0005591760 podman[227328]: 2026-01-22 09:46:32.452226414 +0000 UTC m=+0.093395654 container died d6f79444538af8af30b496b531775944bc9c4d331e9b147c96bf27f94ce6518a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banach, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 04:46:32 np0005591760 systemd[1]: var-lib-containers-storage-overlay-377ad03e956869d88d3029af518385ab3b6735fac02865260f0b584ac03ac9a7-merged.mount: Deactivated successfully.
Jan 22 04:46:32 np0005591760 podman[227328]: 2026-01-22 09:46:32.468540665 +0000 UTC m=+0.109709895 container remove d6f79444538af8af30b496b531775944bc9c4d331e9b147c96bf27f94ce6518a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:46:32 np0005591760 podman[227328]: 2026-01-22 09:46:32.3749378 +0000 UTC m=+0.016107051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:46:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:32 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:46:32 np0005591760 systemd[1]: libpod-conmon-d6f79444538af8af30b496b531775944bc9c4d331e9b147c96bf27f94ce6518a.scope: Deactivated successfully.
Jan 22 04:46:32 np0005591760 python3.9[227376]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769075191.7947652-504-206710775415254/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:32 np0005591760 podman[227397]: 2026-01-22 09:46:32.59353642 +0000 UTC m=+0.028718716 container create ab1eb86b308e552a2c5c0fbe043e5123d52a4856dac2976487f9542d5419ebe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_ptolemy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:46:32 np0005591760 systemd[1]: Started libpod-conmon-ab1eb86b308e552a2c5c0fbe043e5123d52a4856dac2976487f9542d5419ebe8.scope.
Jan 22 04:46:32 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:46:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d73931ffecc6cf44121fff51fe6fe681379e2992761ec276303cf5d27b41f72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d73931ffecc6cf44121fff51fe6fe681379e2992761ec276303cf5d27b41f72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d73931ffecc6cf44121fff51fe6fe681379e2992761ec276303cf5d27b41f72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d73931ffecc6cf44121fff51fe6fe681379e2992761ec276303cf5d27b41f72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:32 np0005591760 podman[227397]: 2026-01-22 09:46:32.657008896 +0000 UTC m=+0.092191212 container init ab1eb86b308e552a2c5c0fbe043e5123d52a4856dac2976487f9542d5419ebe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_ptolemy, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 22 04:46:32 np0005591760 podman[227397]: 2026-01-22 09:46:32.665401952 +0000 UTC m=+0.100584248 container start ab1eb86b308e552a2c5c0fbe043e5123d52a4856dac2976487f9542d5419ebe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_ptolemy, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:46:32 np0005591760 podman[227397]: 2026-01-22 09:46:32.666615601 +0000 UTC m=+0.101797897 container attach ab1eb86b308e552a2c5c0fbe043e5123d52a4856dac2976487f9542d5419ebe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 04:46:32 np0005591760 podman[227397]: 2026-01-22 09:46:32.581652678 +0000 UTC m=+0.016834994 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:46:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:32 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:33 np0005591760 python3.9[227603]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:33 np0005591760 angry_ptolemy[227419]: {}
Jan 22 04:46:33 np0005591760 lvm[227639]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:46:33 np0005591760 lvm[227639]: VG ceph_vg0 finished
Jan 22 04:46:33 np0005591760 systemd[1]: libpod-ab1eb86b308e552a2c5c0fbe043e5123d52a4856dac2976487f9542d5419ebe8.scope: Deactivated successfully.
Jan 22 04:46:33 np0005591760 podman[227397]: 2026-01-22 09:46:33.185295076 +0000 UTC m=+0.620477372 container died ab1eb86b308e552a2c5c0fbe043e5123d52a4856dac2976487f9542d5419ebe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_ptolemy, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:46:33 np0005591760 lvm[227659]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:46:33 np0005591760 lvm[227659]: VG ceph_vg0 finished
Jan 22 04:46:33 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6d73931ffecc6cf44121fff51fe6fe681379e2992761ec276303cf5d27b41f72-merged.mount: Deactivated successfully.
Jan 22 04:46:33 np0005591760 podman[227397]: 2026-01-22 09:46:33.212912452 +0000 UTC m=+0.648094748 container remove ab1eb86b308e552a2c5c0fbe043e5123d52a4856dac2976487f9542d5419ebe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:46:33 np0005591760 systemd[1]: libpod-conmon-ab1eb86b308e552a2c5c0fbe043e5123d52a4856dac2976487f9542d5419ebe8.scope: Deactivated successfully.
Jan 22 04:46:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:33.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:46:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:46:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:46:33 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:46:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:46:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:46:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:33 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:46:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:33.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:46:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:46:33 np0005591760 python3.9[227828]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:46:34 np0005591760 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 04:46:34 np0005591760 systemd[1]: Stopped Load Kernel Modules.
Jan 22 04:46:34 np0005591760 systemd[1]: Stopping Load Kernel Modules...
Jan 22 04:46:34 np0005591760 systemd[1]: Starting Load Kernel Modules...
Jan 22 04:46:34 np0005591760 systemd[1]: Finished Load Kernel Modules.
Jan 22 04:46:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:34 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c008040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:34 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:46:34 np0005591760 python3.9[227985]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:46:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:34 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:35 np0005591760 python3.9[228139]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:46:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:35.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:35 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:35 np0005591760 python3.9[228291]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:46:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:35.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:46:36 np0005591760 python3.9[228414]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769075195.3586457-657-154828799375631/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:36 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:36 np0005591760 python3.9[228567]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:46:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:36 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:36 np0005591760 python3.9[228721]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:37.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:37.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:37.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:37.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:37.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:37 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:46:37] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Jan 22 04:46:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:46:37] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Jan 22 04:46:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094637 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:46:37 np0005591760 python3.9[228873]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 04:46:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:37.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 04:46:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 22 04:46:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:38 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1214001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:38 np0005591760 python3.9[229026]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:46:38 np0005591760 python3.9[229178]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:38 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c008040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:39 np0005591760 python3.9[229332]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:39.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:39 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c008040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:39 np0005591760 python3.9[229484]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:39 np0005591760 python3.9[229636]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=cleanup t=2026-01-22T09:46:39.965711236Z level=info msg="Completed cleanup jobs" duration=4.161079ms
Jan 22 04:46:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 22 04:46:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=plugins.update.checker t=2026-01-22T09:46:40.060000015Z level=info msg="Update check succeeded" duration=38.094985ms
Jan 22 04:46:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=grafana.update.checker t=2026-01-22T09:46:40.060448871Z level=info msg="Update check succeeded" duration=39.967037ms
Jan 22 04:46:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248005d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:40 np0005591760 python3.9[229789]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:46:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:40 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250002630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:40 np0005591760 python3.9[229944]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:46:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:46:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:41.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:46:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:41 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:41 np0005591760 python3.9[230097]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:46:41 np0005591760 systemd[1]: Listening on multipathd control socket.
Jan 22 04:46:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:41.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:46:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:42 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:42 np0005591760 python3.9[230254]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:46:42 np0005591760 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 22 04:46:42 np0005591760 udevadm[230259]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 22 04:46:42 np0005591760 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 22 04:46:42 np0005591760 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 22 04:46:42 np0005591760 multipathd[230262]: --------start up--------
Jan 22 04:46:42 np0005591760 multipathd[230262]: read /etc/multipath.conf
Jan 22 04:46:42 np0005591760 multipathd[230262]: path checkers start up
Jan 22 04:46:42 np0005591760 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 22 04:46:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:42 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248005d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:43 np0005591760 python3.9[230422]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 22 04:46:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:43.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:46:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:43 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:43 np0005591760 python3.9[230574]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 22 04:46:43 np0005591760 kernel: Key type psk registered
Jan 22 04:46:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:43.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:46:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:44 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c008040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:44 np0005591760 python3.9[230738]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:46:44 np0005591760 python3.9[230861]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769075203.790789-1047-239708640458104/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:44 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12500031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:45 np0005591760 python3.9[231014]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:45.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:45 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248005d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:45 np0005591760 python3.9[231166]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:46:45 np0005591760 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 04:46:45 np0005591760 systemd[1]: Stopped Load Kernel Modules.
Jan 22 04:46:45 np0005591760 systemd[1]: Stopping Load Kernel Modules...
Jan 22 04:46:45 np0005591760 systemd[1]: Starting Load Kernel Modules...
Jan 22 04:46:45 np0005591760 systemd[1]: Finished Load Kernel Modules.
Jan 22 04:46:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:45.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:46:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:46 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:46:46 np0005591760 python3.9[231323]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 04:46:46 np0005591760 kernel: ganesha.nfsd[172112]: segfault at 50 ip 00007f12aa59232e sp 00007f12397f9210 error 4 in libntirpc.so.5.8[7f12aa577000+2c000] likely on CPU 2 (core 0, socket 2)
Jan 22 04:46:46 np0005591760 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 22 04:46:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[171928]: 22/01/2026 09:46:46 : epoch 6971f140 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1218005450 fd 48 proxy ignored for local
Jan 22 04:46:46 np0005591760 systemd[1]: Started Process Core Dump (PID 231326/UID 0).
Jan 22 04:46:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:47.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:47.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:47.016Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:47.016Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:47.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:46:47.301 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:46:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:46:47.301 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:46:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:46:47.301 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:46:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:46:47] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Jan 22 04:46:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:46:47] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Jan 22 04:46:47 np0005591760 systemd-coredump[231327]: Process 171933 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007f12aa59232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012#1  0x0000000000000000 n/a (n/a + 0x0)#012#2  0x00007f12aa59c900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)#012ELF object binary architecture: AMD x86-64
Jan 22 04:46:47 np0005591760 systemd[1]: systemd-coredump@2-231326-0.service: Deactivated successfully.
Jan 22 04:46:47 np0005591760 systemd[1]: systemd-coredump@2-231326-0.service: Consumed 1.027s CPU time.
Jan 22 04:46:47 np0005591760 podman[231335]: 2026-01-22 09:46:47.91061367 +0000 UTC m=+0.018869973 container died 46d5950a52b3690cb19824ace85936eab3706e2eeb389f6f32b1f8abcb394b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 22 04:46:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:47.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:47 np0005591760 systemd[1]: var-lib-containers-storage-overlay-651b8bb6a59cba2c5bc7e4b8122a1158f6fe5c6cf639d0e3129f829aaf3e1e86-merged.mount: Deactivated successfully.
Jan 22 04:46:47 np0005591760 podman[231335]: 2026-01-22 09:46:47.936384814 +0000 UTC m=+0.044641097 container remove 46d5950a52b3690cb19824ace85936eab3706e2eeb389f6f32b1f8abcb394b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:46:47 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Main process exited, code=exited, status=139/n/a
Jan 22 04:46:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:46:48 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Failed with result 'exit-code'.
Jan 22 04:46:48 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Consumed 1.110s CPU time.
Jan 22 04:46:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:46:48 np0005591760 systemd[1]: Reloading.
Jan 22 04:46:48 np0005591760 podman[231370]: 2026-01-22 09:46:48.350340762 +0000 UTC m=+0.041386796 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 04:46:48 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:46:48 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:46:48 np0005591760 systemd[1]: Reloading.
Jan 22 04:46:48 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:46:48 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:46:48 np0005591760 systemd-logind[747]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 22 04:46:49 np0005591760 systemd-logind[747]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 22 04:46:49 np0005591760 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 04:46:49 np0005591760 systemd[1]: Starting man-db-cache-update.service...
Jan 22 04:46:49 np0005591760 systemd[1]: Reloading.
Jan 22 04:46:49 np0005591760 lvm[231520]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:46:49 np0005591760 lvm[231520]: VG ceph_vg0 finished
Jan 22 04:46:49 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:46:49 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:46:49
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'images', '.rgw.root', '.mgr', '.nfs', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'vms']
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:46:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:49.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:46:49 np0005591760 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:46:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:49.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:46:50 np0005591760 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 04:46:50 np0005591760 systemd[1]: Finished man-db-cache-update.service.
Jan 22 04:46:50 np0005591760 systemd[1]: man-db-cache-update.service: Consumed 1.079s CPU time.
Jan 22 04:46:50 np0005591760 systemd[1]: run-rd4a962a451e442f780a6b8d1b339982c.service: Deactivated successfully.
Jan 22 04:46:50 np0005591760 python3.9[232865]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:46:50 np0005591760 systemd[1]: Stopping Open-iSCSI...
Jan 22 04:46:50 np0005591760 iscsid[225646]: iscsid shutting down.
Jan 22 04:46:50 np0005591760 systemd[1]: iscsid.service: Deactivated successfully.
Jan 22 04:46:50 np0005591760 systemd[1]: Stopped Open-iSCSI.
Jan 22 04:46:50 np0005591760 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 22 04:46:50 np0005591760 systemd[1]: Starting Open-iSCSI...
Jan 22 04:46:51 np0005591760 systemd[1]: Started Open-iSCSI.
Jan 22 04:46:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:51.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:51 np0005591760 python3.9[233046]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:46:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094651 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:46:51 np0005591760 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 22 04:46:51 np0005591760 multipathd[230262]: exit (signal)
Jan 22 04:46:51 np0005591760 multipathd[230262]: --------shut down-------
Jan 22 04:46:51 np0005591760 systemd[1]: multipathd.service: Deactivated successfully.
Jan 22 04:46:51 np0005591760 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 22 04:46:51 np0005591760 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 22 04:46:51 np0005591760 multipathd[233052]: --------start up--------
Jan 22 04:46:51 np0005591760 multipathd[233052]: read /etc/multipath.conf
Jan 22 04:46:51 np0005591760 multipathd[233052]: path checkers start up
Jan 22 04:46:51 np0005591760 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 22 04:46:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:51.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:46:52 np0005591760 python3.9[233210]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 04:46:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094652 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:46:53 np0005591760 python3.9[233367]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:46:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:46:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:53.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:46:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:46:53 np0005591760 python3.9[233519]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 04:46:53 np0005591760 systemd[1]: Reloading.
Jan 22 04:46:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:53.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:46:53 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:46:54 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:46:54 np0005591760 python3.9[233705]: ansible-ansible.builtin.service_facts Invoked
Jan 22 04:46:54 np0005591760 network[233723]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 04:46:54 np0005591760 network[233724]: 'network-scripts' will be removed from distribution in near future.
Jan 22 04:46:54 np0005591760 network[233725]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 04:46:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:46:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:55.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:46:55 np0005591760 podman[233757]: 2026-01-22 09:46:55.73336622 +0000 UTC m=+0.067798829 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 04:46:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:55.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:46:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:57.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:57.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:57.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:46:57.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:46:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:57.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:57 np0005591760 python3.9[234024]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:46:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:46:57] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Jan 22 04:46:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:46:57] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Jan 22 04:46:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:57.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:46:58 np0005591760 python3.9[234177]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:46:58 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Scheduled restart job, restart counter is at 3.
Jan 22 04:46:58 np0005591760 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:46:58 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Consumed 1.110s CPU time.
Jan 22 04:46:58 np0005591760 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:46:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:46:58 np0005591760 podman[234372]: 2026-01-22 09:46:58.407965607 +0000 UTC m=+0.027504866 container create bc9745f833ab15a95783334c6971b2bac3c4abfd0e382b634fe7b91601565ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 04:46:58 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e998b9f6b9dcc2b8fa3ad99b12273bc5809f4260869226a24610ab0263771fc1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:58 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e998b9f6b9dcc2b8fa3ad99b12273bc5809f4260869226a24610ab0263771fc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:58 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e998b9f6b9dcc2b8fa3ad99b12273bc5809f4260869226a24610ab0263771fc1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:58 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e998b9f6b9dcc2b8fa3ad99b12273bc5809f4260869226a24610ab0263771fc1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ylzmiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:46:58 np0005591760 podman[234372]: 2026-01-22 09:46:58.452300696 +0000 UTC m=+0.071839965 container init bc9745f833ab15a95783334c6971b2bac3c4abfd0e382b634fe7b91601565ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:46:58 np0005591760 podman[234372]: 2026-01-22 09:46:58.45682892 +0000 UTC m=+0.076368179 container start bc9745f833ab15a95783334c6971b2bac3c4abfd0e382b634fe7b91601565ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 22 04:46:58 np0005591760 bash[234372]: bc9745f833ab15a95783334c6971b2bac3c4abfd0e382b634fe7b91601565ed1
Jan 22 04:46:58 np0005591760 podman[234372]: 2026-01-22 09:46:58.39704918 +0000 UTC m=+0.016588459 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:46:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:46:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 22 04:46:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:46:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 22 04:46:58 np0005591760 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:46:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:46:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 22 04:46:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:46:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 22 04:46:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:46:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 22 04:46:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:46:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 22 04:46:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:46:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 22 04:46:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:46:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:46:58 np0005591760 python3.9[234354]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:46:59 np0005591760 python3.9[234578]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:46:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:46:59.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:46:59 np0005591760 python3.9[234731]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:46:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:46:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:46:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:46:59.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:46:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:47:00 np0005591760 python3.9[234885]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:47:00 np0005591760 python3.9[235038]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:47:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:01.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:01 np0005591760 python3.9[235192]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:47:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:01.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:01 np0005591760 python3.9[235345]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
Jan 22 04:47:02 np0005591760 python3.9[235498]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:02 np0005591760 python3.9[235651]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:03 np0005591760 python3.9[235803]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:03.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:47:03 np0005591760 python3.9[235955]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:03.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 22 04:47:04 np0005591760 python3.9[236108]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:04 np0005591760 python3.9[236260]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:47:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:47:04 np0005591760 python3.9[236413]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:05.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:05 np0005591760 python3.9[236565]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:05.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:05 np0005591760 python3.9[236717]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:47:06 np0005591760 python3.9[236870]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:06 np0005591760 python3.9[237023]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:07.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:07.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:07.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:07.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:07.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:07 np0005591760 python3.9[237175]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:47:07] "GET /metrics HTTP/1.1" 200 48408 "" "Prometheus/2.51.0"
Jan 22 04:47:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:47:07] "GET /metrics HTTP/1.1" 200 48408 "" "Prometheus/2.51.0"
Jan 22 04:47:07 np0005591760 python3.9[237327]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:07.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:47:08 np0005591760 python3.9[237480]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:47:08 np0005591760 python3.9[237632]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:09 np0005591760 python3.9[237785]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:47:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:09.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:09 np0005591760 python3.9[237937]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 04:47:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:47:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:09.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:47:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:47:10 np0005591760 python3.9[238090]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 04:47:10 np0005591760 systemd[1]: Reloading.
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 22 04:47:10 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 22 04:47:10 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:47:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:11 np0005591760 python3.9[238319]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:47:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:11.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:11 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20126ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:11 np0005591760 python3.9[238472]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:47:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094711 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:47:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:11.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 22 04:47:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c002500 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:12 np0005591760 python3.9[238626]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:47:12 np0005591760 python3.9[238779]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:47:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094712 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:47:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:12 np0005591760 python3.9[238933]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:47:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:13.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:47:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:13 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:13 np0005591760 python3.9[239086]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:47:13 np0005591760 python3.9[239239]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:47:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:13.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:47:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20126ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:14 np0005591760 python3.9[239393]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 04:47:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c003000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:15.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:15 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:15 np0005591760 python3.9[239547]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:15.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:47:16 np0005591760 python3.9[239699]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:16 np0005591760 python3.9[239852]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:16 np0005591760 python3.9[240005]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:17.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:17.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:17.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:17.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:17.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:17 np0005591760 python3.9[240157]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:17 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c003000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:47:17] "GET /metrics HTTP/1.1" 200 48408 "" "Prometheus/2.51.0"
Jan 22 04:47:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:47:17] "GET /metrics HTTP/1.1" 200 48408 "" "Prometheus/2.51.0"
Jan 22 04:47:17 np0005591760 python3.9[240309]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:17.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 22 04:47:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:18 np0005591760 python3.9[240462]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:47:18 np0005591760 python3.9[240614]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:18 np0005591760 podman[240616]: 2026-01-22 09:47:18.734548649 +0000 UTC m=+0.039887549 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 04:47:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:19 np0005591760 python3.9[240784]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:19.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:47:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:47:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:47:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:47:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:47:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:47:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:19 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:19 np0005591760 python3.9[240936]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:19.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 22 04:47:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c003000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094720 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:47:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800a8b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:21.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:21 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800a8b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:47:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:21.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:47:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
Jan 22 04:47:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c004490 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:23.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:47:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:23 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800b740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:23.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:47:24 np0005591760 python3.9[241093]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 22 04:47:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800b740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:24 np0005591760 python3.9[241246]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 04:47:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:25.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:25 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c004490 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:25 np0005591760 python3.9[241405]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 04:47:25 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:47:25 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:47:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:47:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:25.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:47:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:47:26 np0005591760 podman[241440]: 2026-01-22 09:47:26.063287526 +0000 UTC m=+0.055697270 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 04:47:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800b740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:26 np0005591760 systemd-logind[747]: New session 54 of user zuul.
Jan 22 04:47:26 np0005591760 systemd[1]: Started Session 54 of User zuul.
Jan 22 04:47:26 np0005591760 systemd[1]: session-54.scope: Deactivated successfully.
Jan 22 04:47:26 np0005591760 systemd-logind[747]: Session 54 logged out. Waiting for processes to exit.
Jan 22 04:47:26 np0005591760 systemd-logind[747]: Removed session 54.
Jan 22 04:47:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800b740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:27.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:27 np0005591760 python3.9[241617]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:47:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:27.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:27.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:27.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:27.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:27 np0005591760 python3.9[241738]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769075246.6833167-2654-143728930578260/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:27 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:47:27] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 22 04:47:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:47:27] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 22 04:47:27 np0005591760 python3.9[241888]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:47:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:27 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:47:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:27.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:47:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:28 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c004490 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:28 np0005591760 python3.9[241965]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:47:28 np0005591760 python3.9[242115]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:47:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:28 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800c9e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:28 np0005591760 python3.9[242237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769075248.2269595-2654-245371119806534/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:29.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:29 np0005591760 python3.9[242387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:47:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:29 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800c9e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:29 np0005591760 python3.9[242508]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769075249.0282354-2654-181602817117584/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:29.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:47:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:30 np0005591760 python3.9[242659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:47:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:30 np0005591760 python3.9[242781]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769075250.035485-2654-101575245384910/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:47:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:47:31 np0005591760 python3.9[242956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:47:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:31.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:31 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:31 np0005591760 python3.9[243077]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769075250.9412146-2654-64161917970222/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:47:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:31.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:47:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:47:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:32 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:32 np0005591760 python3.9[243230]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:32 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:32 np0005591760 python3.9[243383]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:33.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:47:33 np0005591760 python3.9[243535]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:47:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:33 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094733 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:47:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:33 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:47:33 np0005591760 python3.9[243749]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:47:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:33.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:47:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:34 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:34 np0005591760 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 22 04:47:34 np0005591760 python3.9[243890]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769075253.544434-2975-249879242368096/.source _original_basename=.4yte42sp follow=False checksum=6cc66f63d898ed6b0f08f9d308cc384f37894048 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 22 04:47:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:34 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:34 np0005591760 python3.9[244044]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:35 np0005591760 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 22 04:47:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:35.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:35 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:35 np0005591760 python3.9[244197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 973 B/s wr, 3 op/s
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:47:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:35 np0005591760 python3.9[244368]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769075255.1744797-3053-87488681093934/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=53b8456782b81b5794d3eef3fadcfb00db1088a8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:35.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:35 np0005591760 podman[244413]: 2026-01-22 09:47:35.97126481 +0000 UTC m=+0.029949443 container create 2fd4c03d0563344368d134a0470cb53fa5f692838e3e35825aa32b8b740b8bd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 04:47:36 np0005591760 systemd[1]: Started libpod-conmon-2fd4c03d0563344368d134a0470cb53fa5f692838e3e35825aa32b8b740b8bd5.scope.
Jan 22 04:47:36 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:47:36 np0005591760 podman[244413]: 2026-01-22 09:47:36.032814167 +0000 UTC m=+0.091498800 container init 2fd4c03d0563344368d134a0470cb53fa5f692838e3e35825aa32b8b740b8bd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_euclid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:47:36 np0005591760 podman[244413]: 2026-01-22 09:47:36.037697278 +0000 UTC m=+0.096381921 container start 2fd4c03d0563344368d134a0470cb53fa5f692838e3e35825aa32b8b740b8bd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:47:36 np0005591760 podman[244413]: 2026-01-22 09:47:36.038689479 +0000 UTC m=+0.097374132 container attach 2fd4c03d0563344368d134a0470cb53fa5f692838e3e35825aa32b8b740b8bd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 22 04:47:36 np0005591760 sleepy_euclid[244437]: 167 167
Jan 22 04:47:36 np0005591760 systemd[1]: libpod-2fd4c03d0563344368d134a0470cb53fa5f692838e3e35825aa32b8b740b8bd5.scope: Deactivated successfully.
Jan 22 04:47:36 np0005591760 conmon[244437]: conmon 2fd4c03d0563344368d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2fd4c03d0563344368d134a0470cb53fa5f692838e3e35825aa32b8b740b8bd5.scope/container/memory.events
Jan 22 04:47:36 np0005591760 podman[244413]: 2026-01-22 09:47:36.042142833 +0000 UTC m=+0.100827466 container died 2fd4c03d0563344368d134a0470cb53fa5f692838e3e35825aa32b8b740b8bd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_euclid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:47:36 np0005591760 systemd[1]: var-lib-containers-storage-overlay-84f2c6f0b7a97ca55d107dc7d8de0e37cf5d9d2fcb6903fb9324c9fc39bf8408-merged.mount: Deactivated successfully.
Jan 22 04:47:36 np0005591760 podman[244413]: 2026-01-22 09:47:35.959279443 +0000 UTC m=+0.017964096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:47:36 np0005591760 podman[244413]: 2026-01-22 09:47:36.060226703 +0000 UTC m=+0.118911336 container remove 2fd4c03d0563344368d134a0470cb53fa5f692838e3e35825aa32b8b740b8bd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_euclid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 04:47:36 np0005591760 systemd[1]: libpod-conmon-2fd4c03d0563344368d134a0470cb53fa5f692838e3e35825aa32b8b740b8bd5.scope: Deactivated successfully.
Jan 22 04:47:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:36 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:36 np0005591760 podman[244512]: 2026-01-22 09:47:36.184083758 +0000 UTC m=+0.030493570 container create f1e3109c854b58e07febceaa0da5cf145b51a8e43e619b9740f902b69959668f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_wiles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 22 04:47:36 np0005591760 systemd[1]: Started libpod-conmon-f1e3109c854b58e07febceaa0da5cf145b51a8e43e619b9740f902b69959668f.scope.
Jan 22 04:47:36 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:47:36 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea8cb4e3f915396417f25587aa07fc1eaf8cb2984ff814994e22fd083ca0e52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:36 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea8cb4e3f915396417f25587aa07fc1eaf8cb2984ff814994e22fd083ca0e52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:36 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea8cb4e3f915396417f25587aa07fc1eaf8cb2984ff814994e22fd083ca0e52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:36 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea8cb4e3f915396417f25587aa07fc1eaf8cb2984ff814994e22fd083ca0e52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:36 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea8cb4e3f915396417f25587aa07fc1eaf8cb2984ff814994e22fd083ca0e52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:36 np0005591760 podman[244512]: 2026-01-22 09:47:36.237234102 +0000 UTC m=+0.083643924 container init f1e3109c854b58e07febceaa0da5cf145b51a8e43e619b9740f902b69959668f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_wiles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 04:47:36 np0005591760 podman[244512]: 2026-01-22 09:47:36.242774232 +0000 UTC m=+0.089184044 container start f1e3109c854b58e07febceaa0da5cf145b51a8e43e619b9740f902b69959668f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_wiles, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:47:36 np0005591760 podman[244512]: 2026-01-22 09:47:36.244180826 +0000 UTC m=+0.090590637 container attach f1e3109c854b58e07febceaa0da5cf145b51a8e43e619b9740f902b69959668f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_wiles, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:47:36 np0005591760 podman[244512]: 2026-01-22 09:47:36.172060388 +0000 UTC m=+0.018470220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:47:36 np0005591760 python3.9[244603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 04:47:36 np0005591760 ecstatic_wiles[244548]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:47:36 np0005591760 ecstatic_wiles[244548]: --> All data devices are unavailable
Jan 22 04:47:36 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 22 04:47:36 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:47:36 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:36 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:36 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:47:36 np0005591760 systemd[1]: libpod-f1e3109c854b58e07febceaa0da5cf145b51a8e43e619b9740f902b69959668f.scope: Deactivated successfully.
Jan 22 04:47:36 np0005591760 podman[244512]: 2026-01-22 09:47:36.514479643 +0000 UTC m=+0.360889455 container died f1e3109c854b58e07febceaa0da5cf145b51a8e43e619b9740f902b69959668f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:47:36 np0005591760 systemd[1]: var-lib-containers-storage-overlay-7ea8cb4e3f915396417f25587aa07fc1eaf8cb2984ff814994e22fd083ca0e52-merged.mount: Deactivated successfully.
Jan 22 04:47:36 np0005591760 podman[244512]: 2026-01-22 09:47:36.536141212 +0000 UTC m=+0.382551024 container remove f1e3109c854b58e07febceaa0da5cf145b51a8e43e619b9740f902b69959668f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_wiles, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 04:47:36 np0005591760 systemd[1]: libpod-conmon-f1e3109c854b58e07febceaa0da5cf145b51a8e43e619b9740f902b69959668f.scope: Deactivated successfully.
Jan 22 04:47:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:36 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:36 np0005591760 python3.9[244795]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769075256.0847535-3098-134223103918247/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=0333d3a3f5c3a0526b0ebe430250032166710e8a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 04:47:36 np0005591760 podman[244850]: 2026-01-22 09:47:36.963053571 +0000 UTC m=+0.031312113 container create dfeb93827f77f315ae29588afa41e79bfe064b0e6cb65c7fa8de408aaaebb941 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bhabha, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:47:36 np0005591760 systemd[1]: Started libpod-conmon-dfeb93827f77f315ae29588afa41e79bfe064b0e6cb65c7fa8de408aaaebb941.scope.
Jan 22 04:47:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:37.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:37 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:47:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:37.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:37.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:37.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:37 np0005591760 podman[244850]: 2026-01-22 09:47:37.01632301 +0000 UTC m=+0.084581562 container init dfeb93827f77f315ae29588afa41e79bfe064b0e6cb65c7fa8de408aaaebb941 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bhabha, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:47:37 np0005591760 podman[244850]: 2026-01-22 09:47:37.023002529 +0000 UTC m=+0.091261071 container start dfeb93827f77f315ae29588afa41e79bfe064b0e6cb65c7fa8de408aaaebb941 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 22 04:47:37 np0005591760 podman[244850]: 2026-01-22 09:47:37.02416942 +0000 UTC m=+0.092427972 container attach dfeb93827f77f315ae29588afa41e79bfe064b0e6cb65c7fa8de408aaaebb941 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 22 04:47:37 np0005591760 heuristic_bhabha[244863]: 167 167
Jan 22 04:47:37 np0005591760 systemd[1]: libpod-dfeb93827f77f315ae29588afa41e79bfe064b0e6cb65c7fa8de408aaaebb941.scope: Deactivated successfully.
Jan 22 04:47:37 np0005591760 podman[244850]: 2026-01-22 09:47:37.027230624 +0000 UTC m=+0.095489167 container died dfeb93827f77f315ae29588afa41e79bfe064b0e6cb65c7fa8de408aaaebb941 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bhabha, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:47:37 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b54fbae8ef9bd8ddc9ad4a53fcfa6bd61d9ea8931d14f87ed3031d3f6d2e42bb-merged.mount: Deactivated successfully.
Jan 22 04:47:37 np0005591760 podman[244850]: 2026-01-22 09:47:36.952078038 +0000 UTC m=+0.020336601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:47:37 np0005591760 podman[244850]: 2026-01-22 09:47:37.047436417 +0000 UTC m=+0.115694959 container remove dfeb93827f77f315ae29588afa41e79bfe064b0e6cb65c7fa8de408aaaebb941 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bhabha, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:47:37 np0005591760 systemd[1]: libpod-conmon-dfeb93827f77f315ae29588afa41e79bfe064b0e6cb65c7fa8de408aaaebb941.scope: Deactivated successfully.
Jan 22 04:47:37 np0005591760 podman[244885]: 2026-01-22 09:47:37.170541632 +0000 UTC m=+0.028983400 container create 38f30e97de0e24a5614c55ed61fa52f34aeb2807fa44ced8423a33bfcd348770 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:47:37 np0005591760 systemd[1]: Started libpod-conmon-38f30e97de0e24a5614c55ed61fa52f34aeb2807fa44ced8423a33bfcd348770.scope.
Jan 22 04:47:37 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:47:37 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4625df8327ba2213fd3c3eddf9210f1d056936a572e0110e68450de0064d58e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:37 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4625df8327ba2213fd3c3eddf9210f1d056936a572e0110e68450de0064d58e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:37 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4625df8327ba2213fd3c3eddf9210f1d056936a572e0110e68450de0064d58e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:37 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4625df8327ba2213fd3c3eddf9210f1d056936a572e0110e68450de0064d58e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:37 np0005591760 podman[244885]: 2026-01-22 09:47:37.222429255 +0000 UTC m=+0.080871032 container init 38f30e97de0e24a5614c55ed61fa52f34aeb2807fa44ced8423a33bfcd348770 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 04:47:37 np0005591760 podman[244885]: 2026-01-22 09:47:37.228291133 +0000 UTC m=+0.086732899 container start 38f30e97de0e24a5614c55ed61fa52f34aeb2807fa44ced8423a33bfcd348770 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_jackson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 04:47:37 np0005591760 podman[244885]: 2026-01-22 09:47:37.229499311 +0000 UTC m=+0.087941078 container attach 38f30e97de0e24a5614c55ed61fa52f34aeb2807fa44ced8423a33bfcd348770 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_jackson, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 04:47:37 np0005591760 podman[244885]: 2026-01-22 09:47:37.158283282 +0000 UTC m=+0.016725069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:47:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:37.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:37 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:37 np0005591760 practical_jackson[244934]: {
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:    "0": [
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:        {
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            "devices": [
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "/dev/loop3"
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            ],
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            "lv_name": "ceph_lv0",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            "lv_size": "21470642176",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            "name": "ceph_lv0",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            "tags": {
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.cluster_name": "ceph",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.crush_device_class": "",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.encrypted": "0",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.osd_id": "0",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.type": "block",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.vdo": "0",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:                "ceph.with_tpm": "0"
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            },
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            "type": "block",
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:            "vg_name": "ceph_vg0"
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:        }
Jan 22 04:47:37 np0005591760 practical_jackson[244934]:    ]
Jan 22 04:47:37 np0005591760 practical_jackson[244934]: }
Jan 22 04:47:37 np0005591760 systemd[1]: libpod-38f30e97de0e24a5614c55ed61fa52f34aeb2807fa44ced8423a33bfcd348770.scope: Deactivated successfully.
Jan 22 04:47:37 np0005591760 podman[244885]: 2026-01-22 09:47:37.472087811 +0000 UTC m=+0.330529579 container died 38f30e97de0e24a5614c55ed61fa52f34aeb2807fa44ced8423a33bfcd348770 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_jackson, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 22 04:47:37 np0005591760 systemd[1]: var-lib-containers-storage-overlay-d4625df8327ba2213fd3c3eddf9210f1d056936a572e0110e68450de0064d58e-merged.mount: Deactivated successfully.
Jan 22 04:47:37 np0005591760 podman[244885]: 2026-01-22 09:47:37.497260082 +0000 UTC m=+0.355701849 container remove 38f30e97de0e24a5614c55ed61fa52f34aeb2807fa44ced8423a33bfcd348770 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:47:37 np0005591760 ceph-mon[74254]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 22 04:47:37 np0005591760 systemd[1]: libpod-conmon-38f30e97de0e24a5614c55ed61fa52f34aeb2807fa44ced8423a33bfcd348770.scope: Deactivated successfully.
Jan 22 04:47:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 884 B/s wr, 2 op/s
Jan 22 04:47:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:47:37] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 22 04:47:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:47:37] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 22 04:47:37 np0005591760 python3.9[245035]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 22 04:47:37 np0005591760 podman[245149]: 2026-01-22 09:47:37.92045562 +0000 UTC m=+0.027603467 container create 293c62a43e9a3842fc11ed4022e2cf96bde57c4d523e7ade3e35fc3f8cd2a014 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_brattain, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:47:37 np0005591760 systemd[1]: Started libpod-conmon-293c62a43e9a3842fc11ed4022e2cf96bde57c4d523e7ade3e35fc3f8cd2a014.scope.
Jan 22 04:47:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:37.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:37 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:47:37 np0005591760 podman[245149]: 2026-01-22 09:47:37.966840323 +0000 UTC m=+0.073988180 container init 293c62a43e9a3842fc11ed4022e2cf96bde57c4d523e7ade3e35fc3f8cd2a014 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_brattain, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 04:47:37 np0005591760 podman[245149]: 2026-01-22 09:47:37.971831206 +0000 UTC m=+0.078979063 container start 293c62a43e9a3842fc11ed4022e2cf96bde57c4d523e7ade3e35fc3f8cd2a014 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 22 04:47:37 np0005591760 podman[245149]: 2026-01-22 09:47:37.972948214 +0000 UTC m=+0.080096060 container attach 293c62a43e9a3842fc11ed4022e2cf96bde57c4d523e7ade3e35fc3f8cd2a014 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_brattain, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:47:37 np0005591760 interesting_brattain[245165]: 167 167
Jan 22 04:47:37 np0005591760 systemd[1]: libpod-293c62a43e9a3842fc11ed4022e2cf96bde57c4d523e7ade3e35fc3f8cd2a014.scope: Deactivated successfully.
Jan 22 04:47:37 np0005591760 conmon[245165]: conmon 293c62a43e9a3842fc11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-293c62a43e9a3842fc11ed4022e2cf96bde57c4d523e7ade3e35fc3f8cd2a014.scope/container/memory.events
Jan 22 04:47:37 np0005591760 podman[245149]: 2026-01-22 09:47:37.975903327 +0000 UTC m=+0.083051175 container died 293c62a43e9a3842fc11ed4022e2cf96bde57c4d523e7ade3e35fc3f8cd2a014 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_brattain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 04:47:37 np0005591760 systemd[1]: var-lib-containers-storage-overlay-89f155f0a3bcbe2a54e07db9bfd548a5131e85b6545535ea1b5381c8c8c4188a-merged.mount: Deactivated successfully.
Jan 22 04:47:37 np0005591760 podman[245149]: 2026-01-22 09:47:37.997307821 +0000 UTC m=+0.104455669 container remove 293c62a43e9a3842fc11ed4022e2cf96bde57c4d523e7ade3e35fc3f8cd2a014 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:47:37 np0005591760 podman[245149]: 2026-01-22 09:47:37.90933797 +0000 UTC m=+0.016485837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:47:38 np0005591760 systemd[1]: libpod-conmon-293c62a43e9a3842fc11ed4022e2cf96bde57c4d523e7ade3e35fc3f8cd2a014.scope: Deactivated successfully.
Jan 22 04:47:38 np0005591760 podman[245237]: 2026-01-22 09:47:38.120391757 +0000 UTC m=+0.029889059 container create 0305e99a68eec6522af4ba7972625025ab893a09e2b168beb9431c26622b206d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:47:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:38 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:38 np0005591760 systemd[1]: Started libpod-conmon-0305e99a68eec6522af4ba7972625025ab893a09e2b168beb9431c26622b206d.scope.
Jan 22 04:47:38 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:47:38 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6ea1720b7d3a1d2d4fbc8afd7672f689a44c9af07e58514bd5f2eee317cda4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:38 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6ea1720b7d3a1d2d4fbc8afd7672f689a44c9af07e58514bd5f2eee317cda4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:38 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6ea1720b7d3a1d2d4fbc8afd7672f689a44c9af07e58514bd5f2eee317cda4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:38 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6ea1720b7d3a1d2d4fbc8afd7672f689a44c9af07e58514bd5f2eee317cda4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:38 np0005591760 podman[245237]: 2026-01-22 09:47:38.170772247 +0000 UTC m=+0.080269559 container init 0305e99a68eec6522af4ba7972625025ab893a09e2b168beb9431c26622b206d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:47:38 np0005591760 podman[245237]: 2026-01-22 09:47:38.176305936 +0000 UTC m=+0.085803238 container start 0305e99a68eec6522af4ba7972625025ab893a09e2b168beb9431c26622b206d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_snyder, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 04:47:38 np0005591760 podman[245237]: 2026-01-22 09:47:38.178956675 +0000 UTC m=+0.088453987 container attach 0305e99a68eec6522af4ba7972625025ab893a09e2b168beb9431c26622b206d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_snyder, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 22 04:47:38 np0005591760 podman[245237]: 2026-01-22 09:47:38.108854114 +0000 UTC m=+0.018351447 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:47:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:47:38 np0005591760 python3.9[245330]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 04:47:38 np0005591760 lvm[245404]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:47:38 np0005591760 hopeful_snyder[245250]: {}
Jan 22 04:47:38 np0005591760 lvm[245404]: VG ceph_vg0 finished
Jan 22 04:47:38 np0005591760 systemd[1]: libpod-0305e99a68eec6522af4ba7972625025ab893a09e2b168beb9431c26622b206d.scope: Deactivated successfully.
Jan 22 04:47:38 np0005591760 podman[245237]: 2026-01-22 09:47:38.702941235 +0000 UTC m=+0.612438537 container died 0305e99a68eec6522af4ba7972625025ab893a09e2b168beb9431c26622b206d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_snyder, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:47:38 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1e6ea1720b7d3a1d2d4fbc8afd7672f689a44c9af07e58514bd5f2eee317cda4-merged.mount: Deactivated successfully.
Jan 22 04:47:38 np0005591760 podman[245237]: 2026-01-22 09:47:38.73961524 +0000 UTC m=+0.649112543 container remove 0305e99a68eec6522af4ba7972625025ab893a09e2b168beb9431c26622b206d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_snyder, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 22 04:47:38 np0005591760 systemd[1]: libpod-conmon-0305e99a68eec6522af4ba7972625025ab893a09e2b168beb9431c26622b206d.scope: Deactivated successfully.
Jan 22 04:47:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:38 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:47:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:47:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:39.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:39 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 884 B/s wr, 2 op/s
Jan 22 04:47:39 np0005591760 python3[245591]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 04:47:39 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:39 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:47:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:39.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:47:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:40 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094740 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:47:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:40 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:41.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:41 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 973 B/s wr, 3 op/s
Jan 22 04:47:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:41.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:42 np0005591760 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 04:47:42 np0005591760 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 22 04:47:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:47:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:47:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:43.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:43 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 442 B/s wr, 1 op/s
Jan 22 04:47:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:43.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:44 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:44 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:45.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:45 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 973 B/s wr, 3 op/s
Jan 22 04:47:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:45 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:47:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:45 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:47:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:45.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:46 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:46 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:47.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:47.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:47.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:47.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:47:47.302 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:47:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:47:47.302 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:47:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:47:47.302 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:47:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:47.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:47 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:47:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:47:47] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 22 04:47:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:47:47] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 22 04:47:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:47.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:47:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:47:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:47:49
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['.mgr', 'vms', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:47:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 22 04:47:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:47:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:49.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:47:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:49 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:47:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:47:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:47:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:49.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:47:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:50 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:47:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:51.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:51 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:47:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:51.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:52 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:52 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:52 np0005591760 podman[245664]: 2026-01-22 09:47:52.856287093 +0000 UTC m=+3.859877003 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 04:47:52 np0005591760 podman[245602]: 2026-01-22 09:47:52.879068173 +0000 UTC m=+13.216967164 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Jan 22 04:47:52 np0005591760 podman[245725]: 2026-01-22 09:47:52.972618342 +0000 UTC m=+0.029777177 container create 0a135ee88e788aa415a76bf5b8584e55212bc2400a33f1d58b6c3e79fdcaa651 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 04:47:52 np0005591760 podman[245725]: 2026-01-22 09:47:52.959196265 +0000 UTC m=+0.016355090 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Jan 22 04:47:52 np0005591760 python3[245591]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 22 04:47:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:47:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:53.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:53 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:53 np0005591760 python3.9[245905]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:47:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 22 04:47:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094753 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:47:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:53.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:54 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:54 np0005591760 python3.9[246060]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 22 04:47:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:54 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500057d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:54 np0005591760 python3.9[246213]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 04:47:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:47:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:55.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:47:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:55 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:47:55 np0005591760 python3[246365]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 04:47:55 np0005591760 podman[246394]: 2026-01-22 09:47:55.936484672 +0000 UTC m=+0.029448547 container create 025af4c5c56cf12ab324c298ffe000e7c9221deae1bf29c0f94a981396ec69b1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Jan 22 04:47:55 np0005591760 podman[246394]: 2026-01-22 09:47:55.923220994 +0000 UTC m=+0.016184890 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Jan 22 04:47:55 np0005591760 python3[246365]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b kolla_start
Jan 22 04:47:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:55.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:56 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:56 np0005591760 podman[246545]: 2026-01-22 09:47:56.425523948 +0000 UTC m=+0.061518960 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 04:47:56 np0005591760 python3.9[246589]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:47:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:56 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:57.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:57.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:57.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:47:57.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:47:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:57.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:57 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 22 04:47:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:47:57] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:47:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:47:57] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:47:57 np0005591760 python3.9[246751]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:57.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:47:58 np0005591760 python3.9[246903]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769075277.9477901-3386-38738756655221/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 04:47:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:58 np0005591760 python3.9[246979]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 04:47:58 np0005591760 systemd[1]: Reloading.
Jan 22 04:47:58 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:47:58 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:47:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:47:59.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:47:59 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500057d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:47:59 np0005591760 python3.9[247092]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 04:47:59 np0005591760 systemd[1]: Reloading.
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:47:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 22 04:47:59 np0005591760 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 04:47:59 np0005591760 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 04:47:59 np0005591760 systemd[1]: Starting nova_compute container...
Jan 22 04:47:59 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:47:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b4adcc6da78f61b50e82765153a6ec12b5262643480d0eb3df193f83427b22/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b4adcc6da78f61b50e82765153a6ec12b5262643480d0eb3df193f83427b22/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b4adcc6da78f61b50e82765153a6ec12b5262643480d0eb3df193f83427b22/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b4adcc6da78f61b50e82765153a6ec12b5262643480d0eb3df193f83427b22/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:59 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b4adcc6da78f61b50e82765153a6ec12b5262643480d0eb3df193f83427b22/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 04:47:59 np0005591760 podman[247132]: 2026-01-22 09:47:59.887428177 +0000 UTC m=+0.076589277 container init 025af4c5c56cf12ab324c298ffe000e7c9221deae1bf29c0f94a981396ec69b1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute)
Jan 22 04:47:59 np0005591760 podman[247132]: 2026-01-22 09:47:59.892737161 +0000 UTC m=+0.081898261 container start 025af4c5c56cf12ab324c298ffe000e7c9221deae1bf29c0f94a981396ec69b1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute)
Jan 22 04:47:59 np0005591760 podman[247132]: nova_compute
Jan 22 04:47:59 np0005591760 nova_compute[247144]: + sudo -E kolla_set_configs
Jan 22 04:47:59 np0005591760 systemd[1]: Started nova_compute container.
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Validating config file
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Copying service configuration files
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Deleting /etc/ceph
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Creating directory /etc/ceph
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /etc/ceph
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Writing out command to execute
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 04:47:59 np0005591760 nova_compute[247144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 04:47:59 np0005591760 nova_compute[247144]: ++ cat /run_command
Jan 22 04:47:59 np0005591760 nova_compute[247144]: + CMD=nova-compute
Jan 22 04:47:59 np0005591760 nova_compute[247144]: + ARGS=
Jan 22 04:47:59 np0005591760 nova_compute[247144]: + sudo kolla_copy_cacerts
Jan 22 04:47:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:47:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:47:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:47:59.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:47:59 np0005591760 nova_compute[247144]: Running command: 'nova-compute'
Jan 22 04:47:59 np0005591760 nova_compute[247144]: + [[ ! -n '' ]]
Jan 22 04:47:59 np0005591760 nova_compute[247144]: + . kolla_extend_start
Jan 22 04:47:59 np0005591760 nova_compute[247144]: + echo 'Running command: '\''nova-compute'\'''
Jan 22 04:47:59 np0005591760 nova_compute[247144]: + umask 0022
Jan 22 04:47:59 np0005591760 nova_compute[247144]: + exec nova-compute
Jan 22 04:48:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:00 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:00 np0005591760 python3.9[247306]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:48:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:00 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c0055b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:01.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:01 np0005591760 python3.9[247458]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:48:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:01 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Jan 22 04:48:01 np0005591760 nova_compute[247144]: 2026-01-22 09:48:01.828 247148 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 04:48:01 np0005591760 nova_compute[247144]: 2026-01-22 09:48:01.829 247148 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 04:48:01 np0005591760 nova_compute[247144]: 2026-01-22 09:48:01.829 247148 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 04:48:01 np0005591760 nova_compute[247144]: 2026-01-22 09:48:01.829 247148 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 22 04:48:01 np0005591760 nova_compute[247144]: 2026-01-22 09:48:01.944 247148 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:48:01 np0005591760 python3.9[247610]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 04:48:01 np0005591760 nova_compute[247144]: 2026-01-22 09:48:01.956 247148 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:48:01 np0005591760 nova_compute[247144]: 2026-01-22 09:48:01.957 247148 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 22 04:48:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:48:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:01.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:48:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:02 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500057d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.488 247148 INFO nova.virt.driver [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.604 247148 INFO nova.compute.provider_config [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.611 247148 DEBUG oslo_concurrency.lockutils [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.611 247148 DEBUG oslo_concurrency.lockutils [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.612 247148 DEBUG oslo_concurrency.lockutils [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.612 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.612 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.613 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.613 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.613 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.613 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.613 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.613 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.614 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.614 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.614 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.614 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.614 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.615 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.615 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.615 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.615 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.615 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.615 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.616 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.616 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.616 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.616 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.616 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.617 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.617 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.617 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.617 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.617 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.617 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.618 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.618 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.618 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.618 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.618 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.619 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.619 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.619 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.619 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.619 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.620 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.620 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.620 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.620 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.620 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.620 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.621 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.621 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.621 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.621 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.621 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.622 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.622 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.622 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.622 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.622 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.622 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.623 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.623 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.623 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.623 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.623 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.624 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.624 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.624 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.624 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.624 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.624 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.625 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.625 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.625 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.625 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.625 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.625 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.626 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.626 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.626 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.626 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.626 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.626 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.627 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.627 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.627 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.627 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.627 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.628 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.628 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.628 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.628 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.628 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.628 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.629 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.629 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.629 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.629 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.629 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.630 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.630 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.630 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.630 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.630 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.630 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.631 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.631 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.631 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.631 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.631 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.632 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.632 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.632 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.632 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.632 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.632 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.633 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.633 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.633 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.633 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.633 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.634 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.634 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.634 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.634 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.634 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.634 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.635 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.635 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.635 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.635 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.635 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.635 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.636 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.636 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.636 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.636 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.636 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.636 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.637 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.637 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.637 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.637 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.637 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.638 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.638 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.638 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.638 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.638 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.638 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.639 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.639 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.639 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.639 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.639 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.640 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.640 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.641 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.641 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.641 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.641 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.641 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.642 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.642 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.642 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.642 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.642 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.642 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.643 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.643 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.643 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.643 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.643 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.644 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.644 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.644 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.644 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.644 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.644 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.645 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.645 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.645 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.645 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.645 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.646 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.646 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.646 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.646 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.646 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.646 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.647 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.647 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.647 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.647 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.647 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.648 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.648 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.648 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.648 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.648 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.648 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.649 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.649 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.649 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.649 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.649 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.650 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.650 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.650 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.650 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.650 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.650 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.651 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.651 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.651 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.651 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.651 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.652 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.652 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.652 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.652 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.652 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.652 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.653 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.653 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.653 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.653 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.653 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.654 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.654 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.654 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.654 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.654 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.654 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.655 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.655 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.655 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.655 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.655 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.655 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.656 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.656 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.656 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.656 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.656 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.657 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.657 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.657 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.657 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.657 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.657 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.658 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.658 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.658 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.658 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.658 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.659 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.659 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.659 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.659 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.659 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.659 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.660 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.660 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.660 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.660 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.660 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.661 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.661 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.661 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.661 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.661 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.661 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.662 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.662 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.662 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.662 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.662 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.663 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.663 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.663 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.663 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.663 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.663 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.664 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.664 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.664 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.664 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.664 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.665 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.665 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.665 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.665 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.665 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.665 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.666 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.666 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.666 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.666 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.666 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.666 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.667 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.667 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.667 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.667 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.667 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.668 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.668 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.668 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.668 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.668 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.669 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.669 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.669 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.669 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.669 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.669 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.670 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.670 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.670 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.670 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.670 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.671 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.671 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.671 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.671 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.671 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.671 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.672 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.672 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.672 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.672 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.672 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.673 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.673 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.673 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.673 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.673 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.673 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.674 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.674 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.674 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.674 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.674 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.675 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.675 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.675 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.675 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.675 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.675 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.676 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.676 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.676 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.676 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.676 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.677 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.677 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.677 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.677 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.677 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.678 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.678 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.678 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.678 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.678 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.679 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.679 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.679 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.679 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.679 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.679 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.680 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.680 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.680 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.680 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.680 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.681 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.681 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.681 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.681 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.681 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.681 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.682 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.682 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.682 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.682 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.682 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.682 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.683 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.683 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.683 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.683 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.683 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.684 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.684 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.684 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.684 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.684 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.685 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.685 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.685 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.685 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.685 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.685 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.686 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.686 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.686 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.686 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.686 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.686 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.687 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.687 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.687 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.687 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.687 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.688 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.688 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.688 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.688 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.688 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.688 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.689 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.689 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.689 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.689 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.689 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.689 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.690 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.690 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.690 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.690 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.690 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.691 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.691 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.691 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.691 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.691 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.691 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.692 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.692 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.692 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.692 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.692 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.693 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.693 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.693 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.693 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.693 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.693 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.694 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.694 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.694 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.694 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.694 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.694 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.695 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.695 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.695 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.695 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.696 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.696 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.696 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.696 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.696 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.696 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.697 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.697 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.697 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.697 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.697 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.698 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.698 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.698 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.698 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.698 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.698 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.699 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.699 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.699 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.699 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.699 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.700 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.700 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.700 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.700 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.701 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.701 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.701 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.701 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.701 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.701 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.702 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.702 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.702 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.702 247148 WARNING oslo_config.cfg [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 22 04:48:02 np0005591760 nova_compute[247144]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 22 04:48:02 np0005591760 nova_compute[247144]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 22 04:48:02 np0005591760 nova_compute[247144]: and ``live_migration_inbound_addr`` respectively.
Jan 22 04:48:02 np0005591760 nova_compute[247144]: ).  Its value may be silently ignored in the future.#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.702 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.703 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.703 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.703 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.703 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.703 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.704 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.704 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.704 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.704 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.704 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.705 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.705 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.705 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.705 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.705 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.705 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.706 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.706 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.rbd_secret_uuid        = 43df7a30-cf5f-5209-adfd-bf44298b19f2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.706 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.706 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.706 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.707 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.707 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.707 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.707 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.707 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.707 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.708 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.708 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.708 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.708 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.708 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.709 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.709 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.709 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.709 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.709 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.710 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.710 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.710 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.710 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.710 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.710 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.711 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.711 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.711 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.711 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.711 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.712 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.712 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.712 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.712 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.712 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.712 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.713 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.713 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.713 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.713 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.713 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.713 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.714 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.714 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.714 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.714 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.714 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.715 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.715 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.715 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.716 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.716 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.716 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.716 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.716 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.716 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.716 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.717 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.717 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.717 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.717 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.717 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.717 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.717 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.718 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.718 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.718 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.718 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.718 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.718 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.719 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.719 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.719 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.719 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.719 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.719 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.719 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.720 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.720 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.720 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.720 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.720 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.720 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.720 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.720 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.721 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.721 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.721 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.721 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.721 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.721 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.721 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.722 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.722 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.722 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.722 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.722 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.722 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.723 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.723 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.723 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.723 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.723 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.724 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.724 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.724 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.724 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.724 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.724 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.724 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.725 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.725 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.725 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.725 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.725 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.725 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.726 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.726 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.726 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.726 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.726 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.726 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.727 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.727 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.727 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.727 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.727 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.727 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.727 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.728 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.728 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.728 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.728 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.728 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.728 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.729 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.729 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.729 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.729 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.729 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.729 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.729 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.730 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.730 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.730 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.730 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.730 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.730 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.730 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.731 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.731 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.731 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.731 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.731 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.731 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.731 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.731 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.732 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.732 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.732 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.732 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.732 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.732 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.733 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.733 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.733 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.733 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.733 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.733 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.733 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.734 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.734 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.734 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.734 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.734 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.734 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.734 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.735 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.735 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.735 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.735 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.735 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.735 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.736 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.736 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.736 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.736 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.736 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.736 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.736 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.737 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.737 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.737 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.737 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.737 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.737 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.737 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.738 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.738 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.738 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.738 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.738 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.738 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.738 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.738 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.739 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.739 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.739 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.739 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.739 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.739 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.739 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.740 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.740 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.740 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.740 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.740 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.740 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.740 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.740 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.741 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.741 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.741 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.741 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.741 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.741 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.741 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.742 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.742 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.742 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.742 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.742 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.742 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.743 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.743 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.743 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.743 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.743 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.743 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.744 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.744 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.744 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.744 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.744 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.744 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.744 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.744 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.745 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.745 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.745 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.745 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.745 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.745 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.745 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.746 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.746 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.746 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.746 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.746 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.746 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.746 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.747 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.747 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.747 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.747 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.747 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.747 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.747 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.748 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.748 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.748 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.748 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.748 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.748 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.748 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.749 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.749 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.749 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.749 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.749 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.749 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.749 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.750 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.750 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.750 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.750 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.750 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.750 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.750 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.751 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.751 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.751 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.751 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.751 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.751 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.751 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.752 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.752 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.752 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.752 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.752 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.752 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.752 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.752 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.753 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.753 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.753 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.753 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.753 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.753 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.753 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.754 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.754 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.754 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.754 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.754 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.754 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.754 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.755 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.755 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.755 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.755 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.755 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.755 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.756 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.756 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.756 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.756 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.756 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.756 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.756 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.757 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.757 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.757 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.757 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.757 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.757 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.757 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.757 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.758 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.758 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.758 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.758 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.758 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.758 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.758 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.759 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.759 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.759 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.759 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.759 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.759 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.759 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.760 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.760 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.760 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.760 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.760 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.760 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.760 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.760 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.761 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.761 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.761 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.761 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.761 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.761 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.761 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.762 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.762 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.762 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.762 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.762 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.762 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.762 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.763 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.763 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.763 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.763 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.763 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.763 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.763 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.764 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.764 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.764 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.764 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.764 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.764 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.764 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.765 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.765 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.765 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.765 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.765 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.765 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.765 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.766 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.766 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.766 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.766 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.766 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.766 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.766 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.766 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.767 247148 DEBUG oslo_service.service [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.768 247148 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.777 247148 DEBUG nova.virt.libvirt.host [None req-321b4dd2-2f44-48ce-b674-f72e1d485236 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.778 247148 DEBUG nova.virt.libvirt.host [None req-321b4dd2-2f44-48ce-b674-f72e1d485236 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.778 247148 DEBUG nova.virt.libvirt.host [None req-321b4dd2-2f44-48ce-b674-f72e1d485236 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.778 247148 DEBUG nova.virt.libvirt.host [None req-321b4dd2-2f44-48ce-b674-f72e1d485236 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 22 04:48:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:02 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:02 np0005591760 python3.9[247765]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 22 04:48:02 np0005591760 systemd[1]: Starting libvirt QEMU daemon...
Jan 22 04:48:02 np0005591760 systemd[1]: Started libvirt QEMU daemon.
Jan 22 04:48:02 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.839 247148 DEBUG nova.virt.libvirt.host [None req-321b4dd2-2f44-48ce-b674-f72e1d485236 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fc37adc5ac0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.841 247148 DEBUG nova.virt.libvirt.host [None req-321b4dd2-2f44-48ce-b674-f72e1d485236 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fc37adc5ac0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.842 247148 INFO nova.virt.libvirt.driver [None req-321b4dd2-2f44-48ce-b674-f72e1d485236 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.854 247148 WARNING nova.virt.libvirt.driver [None req-321b4dd2-2f44-48ce-b674-f72e1d485236 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 22 04:48:02 np0005591760 nova_compute[247144]: 2026-01-22 09:48:02.855 247148 DEBUG nova.virt.libvirt.volume.mount [None req-321b4dd2-2f44-48ce-b674-f72e1d485236 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 22 04:48:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:48:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:03 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c0055d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:03 np0005591760 python3.9[247988]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 04:48:03 np0005591760 systemd[1]: Stopping nova_compute container...
Jan 22 04:48:03 np0005591760 nova_compute[247144]: 2026-01-22 09:48:03.563 247148 DEBUG oslo_concurrency.lockutils [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:48:03 np0005591760 nova_compute[247144]: 2026-01-22 09:48:03.564 247148 DEBUG oslo_concurrency.lockutils [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:48:03 np0005591760 nova_compute[247144]: 2026-01-22 09:48:03.564 247148 DEBUG oslo_concurrency.lockutils [None req-951ca10c-b4c8-491f-9bd1-f8f3201b9fce - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:48:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:48:03 np0005591760 virtqemud[247788]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 22 04:48:03 np0005591760 virtqemud[247788]: hostname: compute-0
Jan 22 04:48:03 np0005591760 virtqemud[247788]: End of file while reading data: Input/output error
Jan 22 04:48:03 np0005591760 systemd[1]: libpod-025af4c5c56cf12ab324c298ffe000e7c9221deae1bf29c0f94a981396ec69b1.scope: Deactivated successfully.
Jan 22 04:48:03 np0005591760 systemd[1]: libpod-025af4c5c56cf12ab324c298ffe000e7c9221deae1bf29c0f94a981396ec69b1.scope: Consumed 2.472s CPU time.
Jan 22 04:48:03 np0005591760 podman[248000]: 2026-01-22 09:48:03.868109634 +0000 UTC m=+0.330152498 container died 025af4c5c56cf12ab324c298ffe000e7c9221deae1bf29c0f94a981396ec69b1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 04:48:03 np0005591760 systemd[1]: var-lib-containers-storage-overlay-d5b4adcc6da78f61b50e82765153a6ec12b5262643480d0eb3df193f83427b22-merged.mount: Deactivated successfully.
Jan 22 04:48:03 np0005591760 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-025af4c5c56cf12ab324c298ffe000e7c9221deae1bf29c0f94a981396ec69b1-userdata-shm.mount: Deactivated successfully.
Jan 22 04:48:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:03.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:04 np0005591760 podman[248000]: 2026-01-22 09:48:04.287247929 +0000 UTC m=+0.749290772 container cleanup 025af4c5c56cf12ab324c298ffe000e7c9221deae1bf29c0f94a981396ec69b1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible)
Jan 22 04:48:04 np0005591760 podman[248000]: nova_compute
Jan 22 04:48:04 np0005591760 podman[248024]: nova_compute
Jan 22 04:48:04 np0005591760 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 22 04:48:04 np0005591760 systemd[1]: Stopped nova_compute container.
Jan 22 04:48:04 np0005591760 systemd[1]: Starting nova_compute container...
Jan 22 04:48:04 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:48:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b4adcc6da78f61b50e82765153a6ec12b5262643480d0eb3df193f83427b22/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b4adcc6da78f61b50e82765153a6ec12b5262643480d0eb3df193f83427b22/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b4adcc6da78f61b50e82765153a6ec12b5262643480d0eb3df193f83427b22/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b4adcc6da78f61b50e82765153a6ec12b5262643480d0eb3df193f83427b22/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b4adcc6da78f61b50e82765153a6ec12b5262643480d0eb3df193f83427b22/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:04 np0005591760 podman[248033]: 2026-01-22 09:48:04.422847963 +0000 UTC m=+0.074403794 container init 025af4c5c56cf12ab324c298ffe000e7c9221deae1bf29c0f94a981396ec69b1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute)
Jan 22 04:48:04 np0005591760 podman[248033]: 2026-01-22 09:48:04.42741632 +0000 UTC m=+0.078972152 container start 025af4c5c56cf12ab324c298ffe000e7c9221deae1bf29c0f94a981396ec69b1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251202)
Jan 22 04:48:04 np0005591760 podman[248033]: nova_compute
Jan 22 04:48:04 np0005591760 nova_compute[248045]: + sudo -E kolla_set_configs
Jan 22 04:48:04 np0005591760 systemd[1]: Started nova_compute container.
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Validating config file
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Copying service configuration files
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Deleting /etc/ceph
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Creating directory /etc/ceph
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /etc/ceph
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Writing out command to execute
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 04:48:04 np0005591760 nova_compute[248045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 04:48:04 np0005591760 nova_compute[248045]: ++ cat /run_command
Jan 22 04:48:04 np0005591760 nova_compute[248045]: + CMD=nova-compute
Jan 22 04:48:04 np0005591760 nova_compute[248045]: + ARGS=
Jan 22 04:48:04 np0005591760 nova_compute[248045]: + sudo kolla_copy_cacerts
Jan 22 04:48:04 np0005591760 nova_compute[248045]: Running command: 'nova-compute'
Jan 22 04:48:04 np0005591760 nova_compute[248045]: + [[ ! -n '' ]]
Jan 22 04:48:04 np0005591760 nova_compute[248045]: + . kolla_extend_start
Jan 22 04:48:04 np0005591760 nova_compute[248045]: + echo 'Running command: '\''nova-compute'\'''
Jan 22 04:48:04 np0005591760 nova_compute[248045]: + umask 0022
Jan 22 04:48:04 np0005591760 nova_compute[248045]: + exec nova-compute
Jan 22 04:48:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:05.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:05 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:05 np0005591760 python3.9[248210]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 22 04:48:05 np0005591760 systemd[1]: Started libpod-conmon-0a135ee88e788aa415a76bf5b8584e55212bc2400a33f1d58b6c3e79fdcaa651.scope.
Jan 22 04:48:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 22 04:48:05 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:48:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6a0d4080c6756e49c8008aca3c694810a45dbc16e3c287c191d3c7ae609736/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6a0d4080c6756e49c8008aca3c694810a45dbc16e3c287c191d3c7ae609736/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6a0d4080c6756e49c8008aca3c694810a45dbc16e3c287c191d3c7ae609736/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:05 np0005591760 podman[248228]: 2026-01-22 09:48:05.601622064 +0000 UTC m=+0.078955330 container init 0a135ee88e788aa415a76bf5b8584e55212bc2400a33f1d58b6c3e79fdcaa651 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 22 04:48:05 np0005591760 podman[248228]: 2026-01-22 09:48:05.608390219 +0000 UTC m=+0.085723465 container start 0a135ee88e788aa415a76bf5b8584e55212bc2400a33f1d58b6c3e79fdcaa651 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init)
Jan 22 04:48:05 np0005591760 python3.9[248210]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Applying nova statedir ownership
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 22 04:48:05 np0005591760 nova_compute_init[248246]: INFO:nova_statedir:Nova statedir ownership complete
Jan 22 04:48:05 np0005591760 systemd[1]: libpod-0a135ee88e788aa415a76bf5b8584e55212bc2400a33f1d58b6c3e79fdcaa651.scope: Deactivated successfully.
Jan 22 04:48:05 np0005591760 podman[248257]: 2026-01-22 09:48:05.690360626 +0000 UTC m=+0.024304556 container died 0a135ee88e788aa415a76bf5b8584e55212bc2400a33f1d58b6c3e79fdcaa651 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 22 04:48:05 np0005591760 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a135ee88e788aa415a76bf5b8584e55212bc2400a33f1d58b6c3e79fdcaa651-userdata-shm.mount: Deactivated successfully.
Jan 22 04:48:05 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6a6a0d4080c6756e49c8008aca3c694810a45dbc16e3c287c191d3c7ae609736-merged.mount: Deactivated successfully.
Jan 22 04:48:05 np0005591760 podman[248257]: 2026-01-22 09:48:05.725455814 +0000 UTC m=+0.059399734 container cleanup 0a135ee88e788aa415a76bf5b8584e55212bc2400a33f1d58b6c3e79fdcaa651 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 04:48:05 np0005591760 systemd[1]: libpod-conmon-0a135ee88e788aa415a76bf5b8584e55212bc2400a33f1d58b6c3e79fdcaa651.scope: Deactivated successfully.
Jan 22 04:48:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.003000033s ======
Jan 22 04:48:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:05.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000033s
Jan 22 04:48:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:06 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c0055f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.170 248049 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.170 248049 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.171 248049 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.171 248049 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 22 04:48:06 np0005591760 systemd[1]: session-53.scope: Deactivated successfully.
Jan 22 04:48:06 np0005591760 systemd[1]: session-53.scope: Consumed 1min 28.378s CPU time.
Jan 22 04:48:06 np0005591760 systemd-logind[747]: Session 53 logged out. Waiting for processes to exit.
Jan 22 04:48:06 np0005591760 systemd-logind[747]: Removed session 53.
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.280 248049 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.291 248049 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.291 248049 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.681 248049 INFO nova.virt.driver [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.764 248049 INFO nova.compute.provider_config [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.778 248049 DEBUG oslo_concurrency.lockutils [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.778 248049 DEBUG oslo_concurrency.lockutils [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.778 248049 DEBUG oslo_concurrency.lockutils [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.779 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.779 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.779 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.779 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.779 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.779 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.780 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.780 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.780 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.780 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.780 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.780 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.780 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:06 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.781 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.781 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.781 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.781 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.781 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.781 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.781 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.782 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.782 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.782 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.782 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.782 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.782 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.782 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.783 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.783 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.783 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.783 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.783 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.783 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.783 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.784 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.784 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.784 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.784 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.784 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.784 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.784 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.785 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.785 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.785 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.785 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.785 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.785 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.786 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.786 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.786 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.786 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.786 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.786 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.786 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.787 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.787 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.787 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.787 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.787 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.787 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.787 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.788 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.788 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.788 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.788 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.788 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.788 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.788 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.789 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.789 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.789 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.789 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.789 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.789 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.789 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.790 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.790 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.790 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.790 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.790 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.790 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.790 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.791 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.791 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.791 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.791 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.791 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.791 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.791 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.791 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.792 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.792 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.792 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.792 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.792 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.792 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.792 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.793 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.793 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.793 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.793 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.793 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.793 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.793 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.794 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.794 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.794 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.794 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.794 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.794 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.794 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.794 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.795 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.795 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.795 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.795 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.795 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.795 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.795 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.796 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.796 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.796 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.797 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.797 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.797 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.797 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.798 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.798 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.798 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.798 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.798 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.798 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.798 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.799 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.799 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.799 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.799 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.799 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.799 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.799 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.800 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.800 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.800 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.800 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.800 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.800 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.800 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.801 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.801 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.801 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.801 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.801 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.801 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.801 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.802 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.802 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.802 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.802 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.802 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.802 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.802 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.803 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.803 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.803 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.803 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.803 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.803 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.803 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.804 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.804 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.804 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.804 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.804 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.804 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.804 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.805 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.805 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.805 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.805 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.805 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.805 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.805 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.806 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.806 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.806 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.806 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.806 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.806 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.806 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.807 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.807 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.807 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.807 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.807 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.807 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.807 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.808 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.808 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.808 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.808 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.808 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.808 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.808 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.809 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.809 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.809 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.809 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.809 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.809 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.809 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.810 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.810 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.810 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.810 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.810 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.810 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.810 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.811 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.811 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.811 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.811 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.811 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.811 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.811 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.812 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.812 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.812 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.812 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.812 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.812 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.812 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.812 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.813 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.813 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.813 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.813 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.813 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.813 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.813 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.814 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.814 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.814 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.814 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.814 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.814 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.814 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.815 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.815 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.815 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.815 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.815 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.815 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.815 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.816 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.816 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.816 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.816 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.816 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.816 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.816 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.817 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.817 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.817 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.817 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.817 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.817 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.817 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.818 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.818 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.818 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.818 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.818 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.818 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.818 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.818 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.819 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.819 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.819 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.819 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.819 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.819 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.819 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.820 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.820 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.820 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.820 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.820 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.820 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.820 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.821 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.821 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.821 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.821 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.821 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.821 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.821 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.822 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.822 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.822 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.822 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.822 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.822 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.822 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.823 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.823 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.823 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.823 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.823 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.823 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.823 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.824 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.824 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.824 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.824 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.824 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.824 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.824 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.825 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.825 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.825 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.825 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.825 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.825 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.825 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.826 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.826 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.826 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.826 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.826 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.826 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.826 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.826 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.827 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.827 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.827 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.827 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.827 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.827 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.827 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.828 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.828 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.828 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.828 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.828 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.828 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.828 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.829 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.829 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.829 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.829 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.829 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.829 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.829 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.830 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.830 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.830 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.830 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.830 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.830 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.831 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.831 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.831 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.831 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.831 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.831 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.831 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.832 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.832 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.832 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.832 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.832 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.832 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.832 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.833 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.833 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.833 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.833 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.833 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.833 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.833 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.834 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.834 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.834 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.834 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.834 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.834 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.834 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.835 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.835 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.835 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.835 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.835 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.835 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.835 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.836 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.836 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.836 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.836 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.836 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.836 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.836 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.837 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.837 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.837 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.837 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.837 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.837 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.837 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.838 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.838 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.838 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.838 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.838 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.838 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.838 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.839 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.839 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.839 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.839 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.839 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.839 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.839 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.839 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.840 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.840 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.840 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.840 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.840 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.840 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.841 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.841 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.841 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.841 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.841 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.841 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.841 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.841 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.842 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.842 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.842 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.842 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.842 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.842 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.842 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.843 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.843 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.843 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.843 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.843 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.843 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.843 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.844 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.844 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.844 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.844 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.844 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.844 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.844 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.845 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.845 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.845 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.845 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.845 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.845 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.845 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.846 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.846 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.846 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.846 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.846 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.846 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.846 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.847 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.847 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.847 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.847 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.847 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.847 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.848 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.848 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.848 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.848 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.848 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.848 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.848 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.849 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.849 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.849 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.849 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.849 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.849 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.850 248049 WARNING oslo_config.cfg [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 22 04:48:06 np0005591760 nova_compute[248045]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 22 04:48:06 np0005591760 nova_compute[248045]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 22 04:48:06 np0005591760 nova_compute[248045]: and ``live_migration_inbound_addr`` respectively.
Jan 22 04:48:06 np0005591760 nova_compute[248045]: ).  Its value may be silently ignored in the future.#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.850 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.850 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.850 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.850 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.850 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.851 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.851 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.851 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.851 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.851 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.851 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.851 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.852 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.852 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.852 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.852 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.852 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.852 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.852 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.rbd_secret_uuid        = 43df7a30-cf5f-5209-adfd-bf44298b19f2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.853 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.853 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.853 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.853 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.853 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.853 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.853 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.854 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.854 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.854 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.854 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.854 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.854 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.855 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.855 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.855 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.855 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.855 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.855 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.855 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.856 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.856 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.856 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.856 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.856 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.856 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.856 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.857 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.857 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.857 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.857 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.857 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.857 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.857 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.858 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.858 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.858 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.858 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.858 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.858 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.858 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.859 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.859 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.859 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.859 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.859 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.859 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.859 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.859 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.860 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.860 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.860 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.860 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.860 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.860 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.860 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.861 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.861 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.861 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.861 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.861 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.861 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.861 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.862 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.862 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.862 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.862 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.862 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.862 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.862 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.863 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.863 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.863 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.863 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.863 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.863 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.864 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.864 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.864 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.864 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.864 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.864 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.864 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.864 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.865 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.865 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.865 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.865 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.865 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.865 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.865 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.866 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.866 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.866 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.866 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.866 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.866 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.866 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.867 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.867 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.867 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.867 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.867 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.867 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.867 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.868 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.868 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.868 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.868 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.868 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.868 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.868 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.869 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.869 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.869 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.869 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.869 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.869 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.869 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.870 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.870 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.870 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.870 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.870 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.870 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.871 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.871 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.871 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.871 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.871 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.871 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.871 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.872 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.872 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.872 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.872 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.872 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.872 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.872 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.873 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.873 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.873 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.873 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.873 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.873 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.873 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.874 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.874 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.874 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.874 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.874 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.874 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.874 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.875 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.875 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.875 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.875 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.875 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.875 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.875 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.876 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.876 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.876 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.876 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.876 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.876 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.876 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.877 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.877 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.877 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.877 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.877 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.877 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.877 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.878 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.878 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.878 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.878 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.878 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.878 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.879 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.879 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.879 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.879 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.879 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.879 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.879 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.880 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.880 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.880 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.880 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.880 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.880 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.880 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.881 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.881 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.881 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.881 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.881 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.881 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.881 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.881 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.882 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.882 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.882 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.882 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.882 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.882 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.882 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.883 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.883 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.883 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.883 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.883 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.883 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.883 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.884 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.884 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.884 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.884 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.884 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.884 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.884 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.885 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.885 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.885 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.885 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.885 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.885 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.885 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.886 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.886 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.886 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.886 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.886 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.886 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.887 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.887 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.887 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.887 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.887 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.887 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.887 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.888 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.888 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.888 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.888 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.888 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.888 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.888 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.888 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.889 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.889 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.889 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.889 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.889 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.889 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.889 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.890 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.890 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.890 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.890 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.890 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.890 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.890 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.891 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.891 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.891 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.891 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.891 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.891 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.891 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.892 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.892 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.892 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.892 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.892 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.892 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.893 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.893 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.893 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.893 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.893 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.893 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.893 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.893 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.894 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.894 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.894 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.894 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.894 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.894 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.894 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.895 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.895 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.895 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.895 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.895 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.895 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.895 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.896 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.896 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.896 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.896 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.896 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.896 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.896 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.897 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.897 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.897 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.897 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.897 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.897 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.897 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.898 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.898 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.898 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.898 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.898 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.898 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.898 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.899 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.899 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.899 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.899 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.899 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.899 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.899 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.900 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.900 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.900 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.900 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.900 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.900 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.900 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.900 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.901 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.901 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.901 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.901 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.901 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.901 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.901 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.902 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.902 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.902 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.902 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.902 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.902 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.902 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.903 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.903 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.903 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.903 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.903 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.903 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.903 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.903 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.904 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.904 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.904 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.904 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.904 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.904 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.904 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.905 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.905 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.905 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.905 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.905 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.905 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.905 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.906 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.906 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.906 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.906 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.906 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.906 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.906 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.907 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.907 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.907 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.907 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.907 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.907 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.907 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.908 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.908 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.908 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.908 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.908 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.908 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.908 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.909 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.909 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.909 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.909 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.909 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.909 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.909 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.909 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.910 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.910 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.910 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.910 248049 DEBUG oslo_service.service [None req-2c18ca77-6492-44f7-bbf3-aae29f35bc9f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.911 248049 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.944 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.945 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.945 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.945 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.954 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7faec9b755b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.955 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7faec9b755b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.956 248049 INFO nova.virt.libvirt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.960 248049 INFO nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Libvirt host capabilities <capabilities>
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  <host>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <uuid>85714d7e-be4c-4576-9a22-3776a24eda65</uuid>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <cpu>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <arch>x86_64</arch>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model>EPYC-Milan-v1</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <vendor>AMD</vendor>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <microcode version='167776725'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <signature family='25' model='1' stepping='1'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <maxphysaddr mode='emulate' bits='48'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='x2apic'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='tsc-deadline'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='osxsave'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='hypervisor'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='tsc_adjust'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='ospke'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='vaes'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='vpclmulqdq'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='spec-ctrl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='stibp'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='arch-capabilities'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='ssbd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='cmp_legacy'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='virt-ssbd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='lbrv'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='tsc-scale'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='vmcb-clean'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='pause-filter'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='pfthreshold'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='v-vmsave-vmload'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='vgif'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='rdctl-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='skip-l1dfl-vmentry'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='mds-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature name='pschange-mc-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <pages unit='KiB' size='4'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <pages unit='KiB' size='2048'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <pages unit='KiB' size='1048576'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </cpu>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <power_management>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <suspend_mem/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </power_management>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <iommu support='no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <migration_features>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <live/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <uri_transports>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <uri_transport>tcp</uri_transport>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <uri_transport>rdma</uri_transport>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </uri_transports>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </migration_features>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <topology>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <cells num='1'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <cell id='0'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:          <memory unit='KiB'>7865364</memory>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:          <pages unit='KiB' size='4'>1966341</pages>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:          <pages unit='KiB' size='2048'>0</pages>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:          <distances>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:            <sibling id='0' value='10'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:          </distances>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:          <cpus num='4'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:          </cpus>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        </cell>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </cells>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </topology>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <cache>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </cache>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <secmodel>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model>selinux</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <doi>0</doi>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </secmodel>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <secmodel>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model>dac</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <doi>0</doi>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </secmodel>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  </host>
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  <guest>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <os_type>hvm</os_type>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <arch name='i686'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <wordsize>32</wordsize>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <domain type='qemu'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <domain type='kvm'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </arch>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <features>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <pae/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <nonpae/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <acpi default='on' toggle='yes'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <apic default='on' toggle='no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <cpuselection/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <deviceboot/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <disksnapshot default='on' toggle='no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <externalSnapshot/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </features>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  </guest>
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  <guest>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <os_type>hvm</os_type>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <arch name='x86_64'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <wordsize>64</wordsize>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <domain type='qemu'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <domain type='kvm'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </arch>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <features>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <acpi default='on' toggle='yes'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <apic default='on' toggle='no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <cpuselection/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <deviceboot/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <disksnapshot default='on' toggle='no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <externalSnapshot/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </features>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  </guest>
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 
Jan 22 04:48:06 np0005591760 nova_compute[248045]: </capabilities>
Jan 22 04:48:06 np0005591760 nova_compute[248045]: #033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.967 248049 WARNING nova.virt.libvirt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.967 248049 DEBUG nova.virt.libvirt.volume.mount [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.968 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 22 04:48:06 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.988 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 22 04:48:06 np0005591760 nova_compute[248045]: <domainCapabilities>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  <domain>kvm</domain>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  <arch>i686</arch>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  <vcpu max='240'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  <iothreads supported='yes'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  <os supported='yes'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <enum name='firmware'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <loader supported='yes'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <value>rom</value>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <value>pflash</value>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <enum name='readonly'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <value>yes</value>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <value>no</value>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <enum name='secure'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <value>no</value>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </loader>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  </os>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:  <cpu>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <mode name='host-passthrough' supported='yes'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <enum name='hostPassthroughMigratable'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <value>on</value>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <value>off</value>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <mode name='maximum' supported='yes'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <enum name='maximumMigratable'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <value>on</value>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <value>off</value>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <mode name='host-model' supported='yes'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model fallback='forbid'>EPYC-Milan</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <vendor>AMD</vendor>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <maxphysaddr mode='passthrough' limit='48'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='x2apic'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='hypervisor'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='vaes'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='vpclmulqdq'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='stibp'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='ssbd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='overflow-recov'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='succor'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='lbrv'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='tsc-scale'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='flushbyasid'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='pause-filter'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='pfthreshold'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='v-vmsave-vmload'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='vgif'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:    <mode name='custom' supported='yes'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Broadwell'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Broadwell-IBRS'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Broadwell-v1'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Broadwell-v3'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='ClearwaterForest'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int16'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='bhi-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ddpd-u'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='sha512'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='sm3'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='sm4'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='ClearwaterForest-v1'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int16'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='bhi-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ddpd-u'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='sha512'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='sm3'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='sm4'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Cooperlake'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Cooperlake-v1'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Cooperlake-v2'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Denverton'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <blockers model='Denverton-v1'>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 04:48:06 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Genoa'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fs-gs-base-ns'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='perfmon-v2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Milan-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Milan-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Turin'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vp2intersect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fs-gs-base-ns'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibpb-brtype'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='perfmon-v2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbpb'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='srso-user-kernel-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Turin-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vp2intersect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fs-gs-base-ns'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibpb-brtype'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='perfmon-v2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbpb'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='srso-user-kernel-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-128'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-256'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-128'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-256'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v6'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v7'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='KnightsMill'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4fmaps'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4vnniw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512er'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512pf'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='KnightsMill-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4fmaps'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4vnniw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512er'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512pf'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G4-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tbm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G5-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tbm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:07.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 04:48:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:07.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 04:48:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:07.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v5'>
Jan 22 04:48:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:07.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='athlon'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='athlon-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='core2duo'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='core2duo-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='coreduo'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='coreduo-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='n270'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='n270-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='phenom'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='phenom-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </cpu>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <memoryBacking supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <enum name='sourceType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>file</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>anonymous</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>memfd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </memoryBacking>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <devices>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <disk supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='diskDevice'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>disk</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>cdrom</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>floppy</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>lun</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='bus'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>ide</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>fdc</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>scsi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>usb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>sata</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-non-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <graphics supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vnc</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>egl-headless</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>dbus</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </graphics>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <video supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='modelType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vga</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>cirrus</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>none</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>bochs</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>ramfb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </video>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <hostdev supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='mode'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>subsystem</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='startupPolicy'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>default</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>mandatory</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>requisite</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>optional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='subsysType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>usb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pci</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>scsi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='capsType'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='pciBackend'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </hostdev>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <rng supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-non-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendModel'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>random</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>egd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>builtin</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </rng>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <filesystem supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='driverType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>path</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>handle</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtiofs</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </filesystem>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <tpm supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tpm-tis</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tpm-crb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendModel'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>emulator</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>external</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendVersion'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>2.0</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </tpm>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <redirdev supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='bus'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>usb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </redirdev>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <channel supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pty</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>unix</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </channel>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <crypto supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>qemu</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendModel'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>builtin</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </crypto>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <interface supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>default</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>passt</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </interface>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <panic supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>isa</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>hyperv</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </panic>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <console supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>null</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vc</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pty</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>dev</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>file</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pipe</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>stdio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>udp</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tcp</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>unix</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>qemu-vdagent</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>dbus</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </console>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </devices>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <features>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <gic supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <vmcoreinfo supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <genid supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <backingStoreInput supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <backup supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <async-teardown supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <s390-pv supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <ps2 supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <tdx supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <sev supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <sgx supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <hyperv supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='features'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>relaxed</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vapic</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>spinlocks</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vpindex</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>runtime</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>synic</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>stimer</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>reset</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vendor_id</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>frequencies</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>reenlightenment</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tlbflush</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>ipi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>avic</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>emsr_bitmap</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>xmm_input</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <defaults>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <spinlocks>4095</spinlocks>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <stimer_direct>on</stimer_direct>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </defaults>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </hyperv>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <launchSecurity supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </features>
Jan 22 04:48:07 np0005591760 nova_compute[248045]: </domainCapabilities>
Jan 22 04:48:07 np0005591760 nova_compute[248045]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:06.993 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 22 04:48:07 np0005591760 nova_compute[248045]: <domainCapabilities>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <domain>kvm</domain>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <arch>i686</arch>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <vcpu max='4096'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <iothreads supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <os supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <enum name='firmware'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <loader supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>rom</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pflash</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='readonly'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>yes</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>no</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='secure'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>no</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </loader>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </os>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <cpu>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <mode name='host-passthrough' supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='hostPassthroughMigratable'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>on</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>off</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <mode name='maximum' supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='maximumMigratable'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>on</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>off</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <mode name='host-model' supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model fallback='forbid'>EPYC-Milan</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <vendor>AMD</vendor>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <maxphysaddr mode='passthrough' limit='48'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='x2apic'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='hypervisor'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='vaes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='vpclmulqdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='stibp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='ssbd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='overflow-recov'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='succor'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='lbrv'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='tsc-scale'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='flushbyasid'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='pause-filter'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='pfthreshold'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='v-vmsave-vmload'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='vgif'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <mode name='custom' supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Broadwell'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Broadwell-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Broadwell-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Broadwell-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='ClearwaterForest'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ddpd-u'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sha512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sm3'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sm4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='ClearwaterForest-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ddpd-u'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sha512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sm3'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sm4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cooperlake'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cooperlake-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cooperlake-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Denverton'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Denverton-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Genoa'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fs-gs-base-ns'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='perfmon-v2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Milan-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Milan-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Turin'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vp2intersect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fs-gs-base-ns'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibpb-brtype'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='perfmon-v2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbpb'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='srso-user-kernel-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Turin-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vp2intersect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fs-gs-base-ns'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibpb-brtype'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='perfmon-v2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbpb'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='srso-user-kernel-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-128'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-256'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-128'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-256'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v6'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v7'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='KnightsMill'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4fmaps'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4vnniw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512er'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512pf'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='KnightsMill-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4fmaps'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4vnniw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512er'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512pf'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G4-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tbm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G5-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tbm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='athlon'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='athlon-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='core2duo'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='core2duo-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='coreduo'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='coreduo-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='n270'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='n270-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='phenom'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='phenom-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </cpu>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <memoryBacking supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <enum name='sourceType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>file</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>anonymous</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>memfd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </memoryBacking>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <devices>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <disk supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='diskDevice'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>disk</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>cdrom</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>floppy</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>lun</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='bus'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>fdc</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>scsi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>usb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>sata</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-non-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <graphics supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vnc</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>egl-headless</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>dbus</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </graphics>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <video supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='modelType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vga</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>cirrus</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>none</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>bochs</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>ramfb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </video>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <hostdev supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='mode'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>subsystem</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='startupPolicy'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>default</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>mandatory</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>requisite</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>optional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='subsysType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>usb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pci</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>scsi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='capsType'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='pciBackend'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </hostdev>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <rng supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-non-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendModel'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>random</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>egd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>builtin</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </rng>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <filesystem supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='driverType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>path</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>handle</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtiofs</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </filesystem>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <tpm supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tpm-tis</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tpm-crb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendModel'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>emulator</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>external</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendVersion'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>2.0</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </tpm>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <redirdev supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='bus'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>usb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </redirdev>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <channel supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pty</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>unix</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </channel>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <crypto supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>qemu</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendModel'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>builtin</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </crypto>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <interface supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>default</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>passt</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </interface>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <panic supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>isa</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>hyperv</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </panic>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <console supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>null</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vc</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pty</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>dev</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>file</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pipe</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>stdio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>udp</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tcp</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>unix</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>qemu-vdagent</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>dbus</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </console>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </devices>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <features>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <gic supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <vmcoreinfo supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <genid supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <backingStoreInput supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <backup supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <async-teardown supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <s390-pv supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <ps2 supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <tdx supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <sev supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <sgx supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <hyperv supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='features'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>relaxed</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vapic</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>spinlocks</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vpindex</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>runtime</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>synic</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>stimer</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>reset</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vendor_id</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>frequencies</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>reenlightenment</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tlbflush</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>ipi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>avic</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>emsr_bitmap</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>xmm_input</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <defaults>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <spinlocks>4095</spinlocks>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <stimer_direct>on</stimer_direct>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </defaults>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </hyperv>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <launchSecurity supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </features>
Jan 22 04:48:07 np0005591760 nova_compute[248045]: </domainCapabilities>
Jan 22 04:48:07 np0005591760 nova_compute[248045]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.025 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.028 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 22 04:48:07 np0005591760 nova_compute[248045]: <domainCapabilities>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <domain>kvm</domain>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <arch>x86_64</arch>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <vcpu max='240'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <iothreads supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <os supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <enum name='firmware'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <loader supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>rom</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pflash</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='readonly'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>yes</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>no</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='secure'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>no</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </loader>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </os>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <cpu>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <mode name='host-passthrough' supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='hostPassthroughMigratable'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>on</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>off</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <mode name='maximum' supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='maximumMigratable'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>on</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>off</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <mode name='host-model' supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model fallback='forbid'>EPYC-Milan</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <vendor>AMD</vendor>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <maxphysaddr mode='passthrough' limit='48'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='x2apic'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='hypervisor'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='vaes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='vpclmulqdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='stibp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='ssbd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='overflow-recov'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='succor'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='lbrv'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='tsc-scale'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='flushbyasid'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='pause-filter'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='pfthreshold'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='v-vmsave-vmload'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='vgif'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <mode name='custom' supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Broadwell'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Broadwell-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Broadwell-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Broadwell-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='ClearwaterForest'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ddpd-u'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sha512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sm3'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sm4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='ClearwaterForest-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ddpd-u'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sha512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sm3'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sm4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cooperlake'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cooperlake-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cooperlake-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Denverton'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Denverton-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Genoa'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fs-gs-base-ns'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='perfmon-v2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Milan-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Milan-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Turin'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vp2intersect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fs-gs-base-ns'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibpb-brtype'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='perfmon-v2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbpb'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='srso-user-kernel-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Turin-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vp2intersect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fs-gs-base-ns'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibpb-brtype'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='perfmon-v2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbpb'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='srso-user-kernel-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-128'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-256'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-128'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-256'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v6'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v7'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='KnightsMill'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4fmaps'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4vnniw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512er'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512pf'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='KnightsMill-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4fmaps'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4vnniw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512er'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512pf'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G4-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tbm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G5-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tbm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='athlon'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='athlon-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='core2duo'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='core2duo-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='coreduo'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='coreduo-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='n270'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='n270-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='phenom'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='phenom-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </cpu>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <memoryBacking supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <enum name='sourceType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>file</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>anonymous</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>memfd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </memoryBacking>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <devices>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <disk supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='diskDevice'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>disk</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>cdrom</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>floppy</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>lun</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='bus'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>ide</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>fdc</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>scsi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>usb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>sata</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-non-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <graphics supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vnc</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>egl-headless</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>dbus</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </graphics>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <video supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='modelType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vga</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>cirrus</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>none</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>bochs</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>ramfb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </video>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <hostdev supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='mode'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>subsystem</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='startupPolicy'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>default</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>mandatory</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>requisite</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>optional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='subsysType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>usb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pci</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>scsi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='capsType'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='pciBackend'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </hostdev>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <rng supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-non-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendModel'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>random</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>egd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>builtin</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </rng>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <filesystem supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='driverType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>path</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>handle</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtiofs</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </filesystem>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <tpm supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tpm-tis</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tpm-crb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendModel'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>emulator</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>external</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendVersion'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>2.0</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </tpm>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <redirdev supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='bus'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>usb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </redirdev>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <channel supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pty</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>unix</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </channel>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <crypto supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>qemu</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendModel'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>builtin</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </crypto>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <interface supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>default</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>passt</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </interface>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <panic supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>isa</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>hyperv</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </panic>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <console supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>null</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vc</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pty</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>dev</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>file</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pipe</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>stdio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>udp</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tcp</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>unix</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>qemu-vdagent</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>dbus</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </console>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </devices>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <features>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <gic supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <vmcoreinfo supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <genid supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <backingStoreInput supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <backup supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <async-teardown supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <s390-pv supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <ps2 supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <tdx supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <sev supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <sgx supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <hyperv supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='features'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>relaxed</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vapic</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>spinlocks</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vpindex</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>runtime</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>synic</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>stimer</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>reset</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vendor_id</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>frequencies</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>reenlightenment</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tlbflush</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>ipi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>avic</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>emsr_bitmap</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>xmm_input</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <defaults>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <spinlocks>4095</spinlocks>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <stimer_direct>on</stimer_direct>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </defaults>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </hyperv>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <launchSecurity supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </features>
Jan 22 04:48:07 np0005591760 nova_compute[248045]: </domainCapabilities>
Jan 22 04:48:07 np0005591760 nova_compute[248045]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.081 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 22 04:48:07 np0005591760 nova_compute[248045]: <domainCapabilities>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <domain>kvm</domain>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <arch>x86_64</arch>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <vcpu max='4096'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <iothreads supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <os supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <enum name='firmware'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>efi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <loader supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>rom</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pflash</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='readonly'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>yes</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>no</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='secure'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>yes</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>no</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </loader>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </os>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <cpu>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <mode name='host-passthrough' supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='hostPassthroughMigratable'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>on</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>off</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <mode name='maximum' supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='maximumMigratable'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>on</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>off</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <mode name='host-model' supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model fallback='forbid'>EPYC-Milan</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <vendor>AMD</vendor>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <maxphysaddr mode='passthrough' limit='48'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='x2apic'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='hypervisor'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='vaes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='vpclmulqdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='stibp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='ssbd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='overflow-recov'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='succor'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='lbrv'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='tsc-scale'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='flushbyasid'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='pause-filter'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='pfthreshold'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='v-vmsave-vmload'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='vgif'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <mode name='custom' supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Broadwell'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Broadwell-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Broadwell-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Broadwell-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='ClearwaterForest'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ddpd-u'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sha512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sm3'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sm4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='ClearwaterForest-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ddpd-u'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sha512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sm3'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sm4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cooperlake'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cooperlake-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Cooperlake-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Denverton'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Denverton-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Genoa'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fs-gs-base-ns'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='perfmon-v2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Milan-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Milan-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Turin'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vp2intersect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fs-gs-base-ns'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibpb-brtype'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='perfmon-v2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbpb'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='srso-user-kernel-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='EPYC-Turin-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amd-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='auto-ibrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vp2intersect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fs-gs-base-ns'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibpb-brtype'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='no-nested-data-bp'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='null-sel-clr-base'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='perfmon-v2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbpb'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='srso-user-kernel-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='stibp-always-on'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='AMD'>EPYC-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-128'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-256'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='GraniteRapids-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-128'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-256'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx10-512'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='prefetchiti'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Haswell-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v6'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Icelake-Server-v7'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='KnightsMill'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4fmaps'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4vnniw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512er'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512pf'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='KnightsMill-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4fmaps'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-4vnniw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512er'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512pf'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G4-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tbm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Opteron_G5-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fma4'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tbm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xop'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SapphireRapids-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='amx-tile'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-bf16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-fp16'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512-vpopcntdq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bitalg'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vbmi2'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrc'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fzrm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='la57'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='taa-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='tsx-ldtrk'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='xfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='SierraForest-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ifma'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-ne-convert'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx-vnni-int8'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bhi-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='bus-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cmpccxadd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fbsdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='fsrs'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ibrs-all'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='intel-psfd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ipred-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='lam'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mcdt-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='pbrsb-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='psdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rrsba-ctrl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='sbdr-ssdp-no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='serialize'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Client-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='hle'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='rtm'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Skylake-Server-v5'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512bw'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512cd'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512dq'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512f'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='avx512vl'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='mpx'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v2'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v3'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='core-capability'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='split-lock-detect'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='Snowridge-v4'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='cldemote'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='gfni'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdir64b'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='movdiri'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='athlon'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='athlon-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='core2duo'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='core2duo-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='coreduo'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='coreduo-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='n270'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='n270-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='ss'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='phenom'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <blockers model='phenom-v1'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnow'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <feature name='3dnowext'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </blockers>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </mode>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </cpu>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <memoryBacking supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <enum name='sourceType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>file</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>anonymous</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <value>memfd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </memoryBacking>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <devices>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <disk supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='diskDevice'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>disk</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>cdrom</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>floppy</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>lun</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='bus'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>fdc</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>scsi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>usb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>sata</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-non-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <graphics supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vnc</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>egl-headless</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>dbus</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </graphics>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <video supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='modelType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vga</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>cirrus</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>none</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>bochs</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>ramfb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </video>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <hostdev supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='mode'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>subsystem</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='startupPolicy'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>default</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>mandatory</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>requisite</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>optional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='subsysType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>usb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pci</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>scsi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='capsType'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='pciBackend'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </hostdev>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <rng supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtio-non-transitional</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendModel'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>random</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>egd</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>builtin</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </rng>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <filesystem supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='driverType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>path</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>handle</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>virtiofs</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </filesystem>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <tpm supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tpm-tis</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tpm-crb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendModel'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>emulator</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>external</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendVersion'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>2.0</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </tpm>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <redirdev supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='bus'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>usb</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </redirdev>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <channel supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pty</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>unix</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </channel>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <crypto supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>qemu</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendModel'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>builtin</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </crypto>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <interface supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='backendType'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>default</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>passt</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </interface>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <panic supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='model'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>isa</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>hyperv</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </panic>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <console supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='type'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>null</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vc</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pty</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>dev</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>file</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>pipe</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>stdio</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>udp</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tcp</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>unix</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>qemu-vdagent</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>dbus</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </console>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </devices>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  <features>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <gic supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <vmcoreinfo supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <genid supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <backingStoreInput supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <backup supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <async-teardown supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <s390-pv supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <ps2 supported='yes'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <tdx supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <sev supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <sgx supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <hyperv supported='yes'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <enum name='features'>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>relaxed</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vapic</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>spinlocks</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vpindex</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>runtime</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>synic</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>stimer</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>reset</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>vendor_id</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>frequencies</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>reenlightenment</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>tlbflush</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>ipi</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>avic</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>emsr_bitmap</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <value>xmm_input</value>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </enum>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      <defaults>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <spinlocks>4095</spinlocks>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <stimer_direct>on</stimer_direct>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:      </defaults>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    </hyperv>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:    <launchSecurity supported='no'/>
Jan 22 04:48:07 np0005591760 nova_compute[248045]:  </features>
Jan 22 04:48:07 np0005591760 nova_compute[248045]: </domainCapabilities>
Jan 22 04:48:07 np0005591760 nova_compute[248045]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.132 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.133 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.133 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.133 248049 INFO nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Secure Boot support detected#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.134 248049 INFO nova.virt.libvirt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.134 248049 INFO nova.virt.libvirt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.140 248049 DEBUG nova.virt.libvirt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.158 248049 INFO nova.virt.node [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Determined node identity 2b3e95f6-2954-4361-8d92-e808c4373b7f from /var/lib/nova/compute_id#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.167 248049 WARNING nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Compute nodes ['2b3e95f6-2954-4361-8d92-e808c4373b7f'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.188 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.206 248049 WARNING nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.206 248049 DEBUG oslo_concurrency.lockutils [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.207 248049 DEBUG oslo_concurrency.lockutils [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.207 248049 DEBUG oslo_concurrency.lockutils [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.207 248049 DEBUG nova.compute.resource_tracker [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.207 248049 DEBUG oslo_concurrency.processutils [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:48:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:07 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:48:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1816416169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.561 248049 DEBUG oslo_concurrency.processutils [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:48:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:48:07 np0005591760 systemd[1]: Starting libvirt nodedev daemon...
Jan 22 04:48:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:48:07] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:48:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:48:07] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:48:07 np0005591760 systemd[1]: Started libvirt nodedev daemon.
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.927 248049 WARNING nova.virt.libvirt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.928 248049 DEBUG nova.compute.resource_tracker [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4966MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": 
"0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.929 248049 DEBUG oslo_concurrency.lockutils [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.929 248049 DEBUG oslo_concurrency.lockutils [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.958 248049 WARNING nova.compute.resource_tracker [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] No compute node record for compute-0.ctlplane.example.com:2b3e95f6-2954-4361-8d92-e808c4373b7f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 2b3e95f6-2954-4361-8d92-e808c4373b7f could not be found.#033[00m
Jan 22 04:48:07 np0005591760 nova_compute[248045]: 2026-01-22 09:48:07.978 248049 INFO nova.compute.resource_tracker [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 2b3e95f6-2954-4361-8d92-e808c4373b7f#033[00m
Jan 22 04:48:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:07.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:08 np0005591760 nova_compute[248045]: 2026-01-22 09:48:08.015 248049 DEBUG nova.compute.resource_tracker [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 04:48:08 np0005591760 nova_compute[248045]: 2026-01-22 09:48:08.015 248049 DEBUG nova.compute.resource_tracker [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 04:48:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:08 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:48:08 np0005591760 nova_compute[248045]: 2026-01-22 09:48:08.438 248049 INFO nova.scheduler.client.report [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [req-7cde7f1d-3a16-4084-997c-6108ec4f9342] Created resource provider record via placement API for resource provider with UUID 2b3e95f6-2954-4361-8d92-e808c4373b7f and name compute-0.ctlplane.example.com.#033[00m
Jan 22 04:48:08 np0005591760 nova_compute[248045]: 2026-01-22 09:48:08.694 248049 DEBUG oslo_concurrency.processutils [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:48:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:08 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:48:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433155485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:48:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:48:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1186669867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.039 248049 DEBUG oslo_concurrency.processutils [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.043 248049 DEBUG nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 22 04:48:09 np0005591760 nova_compute[248045]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.043 248049 INFO nova.virt.libvirt.host [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] kernel doesn't support AMD SEV#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.044 248049 DEBUG nova.compute.provider_tree [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Updating inventory in ProviderTree for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.044 248049 DEBUG nova.virt.libvirt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.110 248049 DEBUG nova.scheduler.client.report [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Updated inventory for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.110 248049 DEBUG nova.compute.provider_tree [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Updating resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.111 248049 DEBUG nova.compute.provider_tree [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Updating inventory in ProviderTree for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.186 248049 DEBUG nova.compute.provider_tree [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Updating resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.205 248049 DEBUG nova.compute.resource_tracker [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.205 248049 DEBUG oslo_concurrency.lockutils [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.205 248049 DEBUG nova.service [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.275 248049 DEBUG nova.service [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Jan 22 04:48:09 np0005591760 nova_compute[248045]: 2026-01-22 09:48:09.275 248049 DEBUG nova.servicegroup.drivers.db [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Jan 22 04:48:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:09.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:09 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:48:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:09.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:48:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:48:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:11 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:48:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:11.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:48:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:13.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:13 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:48:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:13.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69580bf270 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:15.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:15 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:48:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:15.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69580bfdb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:17.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:17.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:17.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:17.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:17.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:17 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69580bfdb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:48:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:48:17] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:48:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:48:17] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:48:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:17.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:48:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:48:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:48:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:48:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:48:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:19.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:48:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:48:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:19 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:48:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:19.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954005340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:21.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:21 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:48:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:22.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:23 np0005591760 podman[248443]: 2026-01-22 09:48:23.051665722 +0000 UTC m=+0.040894037 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:48:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:48:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:23.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:23 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954005c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:48:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:24.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954006750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:25.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:25 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:48:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:48:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:26.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:48:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954006750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960003070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:27.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:27.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:27.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:27.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:27 np0005591760 podman[248464]: 2026-01-22 09:48:27.064126142 +0000 UTC m=+0.054271321 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 22 04:48:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:27.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:27 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:48:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:48:27] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 22 04:48:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:48:27] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 22 04:48:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:28.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:28 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:48:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:28 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:29.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:29 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c0057f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:48:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:30.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954007070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:30 np0005591760 nova_compute[248045]: 2026-01-22 09:48:30.277 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:48:30 np0005591760 nova_compute[248045]: 2026-01-22 09:48:30.315 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:48:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:31.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:31 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:48:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:48:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:32.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:48:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:32 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:32 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954007990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:48:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:48:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:33.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:48:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:33 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:48:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:34.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:34 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c0057f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:34 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:35.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:35 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954007990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:48:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:36.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:36 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:36 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:37.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:37.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:37.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:37.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:37.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:37 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693800d300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:48:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:48:37] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:48:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:48:37] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:48:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:48:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:38.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:48:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:38 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954007990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:48:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:38 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c0057f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:38.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:38.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:38.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:39.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:39 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 256 B/s rd, 0 op/s
Jan 22 04:48:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:48:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:48:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:48:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:48:39 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:48:39 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:48:39 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:48:39 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:48:39 np0005591760 podman[248684]: 2026-01-22 09:48:39.94489579 +0000 UTC m=+0.028191145 container create 7f926c21aec1c56b682f9d588ce7f425af63d2f66f42da2783620b5749d1e2ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:48:39 np0005591760 systemd[1]: Started libpod-conmon-7f926c21aec1c56b682f9d588ce7f425af63d2f66f42da2783620b5749d1e2ec.scope.
Jan 22 04:48:39 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:48:40 np0005591760 podman[248684]: 2026-01-22 09:48:40.003988227 +0000 UTC m=+0.087283601 container init 7f926c21aec1c56b682f9d588ce7f425af63d2f66f42da2783620b5749d1e2ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_davinci, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 04:48:40 np0005591760 podman[248684]: 2026-01-22 09:48:40.008656528 +0000 UTC m=+0.091951881 container start 7f926c21aec1c56b682f9d588ce7f425af63d2f66f42da2783620b5749d1e2ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 04:48:40 np0005591760 podman[248684]: 2026-01-22 09:48:40.010691272 +0000 UTC m=+0.093986626 container attach 7f926c21aec1c56b682f9d588ce7f425af63d2f66f42da2783620b5749d1e2ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 04:48:40 np0005591760 goofy_davinci[248698]: 167 167
Jan 22 04:48:40 np0005591760 podman[248684]: 2026-01-22 09:48:40.01253569 +0000 UTC m=+0.095831044 container died 7f926c21aec1c56b682f9d588ce7f425af63d2f66f42da2783620b5749d1e2ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_davinci, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:48:40 np0005591760 systemd[1]: libpod-7f926c21aec1c56b682f9d588ce7f425af63d2f66f42da2783620b5749d1e2ec.scope: Deactivated successfully.
Jan 22 04:48:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:40.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:40 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6cb0bf4cd22dc5d98da4f218e2731161e2f7dad2cf8e82186ae9540d3af1278c-merged.mount: Deactivated successfully.
Jan 22 04:48:40 np0005591760 podman[248684]: 2026-01-22 09:48:39.933219509 +0000 UTC m=+0.016514884 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:48:40 np0005591760 podman[248684]: 2026-01-22 09:48:40.040553146 +0000 UTC m=+0.123848500 container remove 7f926c21aec1c56b682f9d588ce7f425af63d2f66f42da2783620b5749d1e2ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_davinci, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 04:48:40 np0005591760 systemd[1]: libpod-conmon-7f926c21aec1c56b682f9d588ce7f425af63d2f66f42da2783620b5749d1e2ec.scope: Deactivated successfully.
Jan 22 04:48:40 np0005591760 podman[248720]: 2026-01-22 09:48:40.163141779 +0000 UTC m=+0.030852882 container create d6f7a262887d87dc36d0a0ecb05d7396fd7562287ce2ffd191745c6e7cffe909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_boyd, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 22 04:48:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:40 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:40 np0005591760 systemd[1]: Started libpod-conmon-d6f7a262887d87dc36d0a0ecb05d7396fd7562287ce2ffd191745c6e7cffe909.scope.
Jan 22 04:48:40 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:48:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91aa3058d654ecdca2b96eec2b36de5c3ca609aee3d18e3c43173d1c2927d70e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91aa3058d654ecdca2b96eec2b36de5c3ca609aee3d18e3c43173d1c2927d70e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91aa3058d654ecdca2b96eec2b36de5c3ca609aee3d18e3c43173d1c2927d70e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91aa3058d654ecdca2b96eec2b36de5c3ca609aee3d18e3c43173d1c2927d70e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:40 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91aa3058d654ecdca2b96eec2b36de5c3ca609aee3d18e3c43173d1c2927d70e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:40 np0005591760 podman[248720]: 2026-01-22 09:48:40.22807721 +0000 UTC m=+0.095788333 container init d6f7a262887d87dc36d0a0ecb05d7396fd7562287ce2ffd191745c6e7cffe909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_boyd, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:48:40 np0005591760 podman[248720]: 2026-01-22 09:48:40.234284522 +0000 UTC m=+0.101995625 container start d6f7a262887d87dc36d0a0ecb05d7396fd7562287ce2ffd191745c6e7cffe909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_boyd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 04:48:40 np0005591760 podman[248720]: 2026-01-22 09:48:40.2353974 +0000 UTC m=+0.103108523 container attach d6f7a262887d87dc36d0a0ecb05d7396fd7562287ce2ffd191745c6e7cffe909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_boyd, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 22 04:48:40 np0005591760 podman[248720]: 2026-01-22 09:48:40.149393943 +0000 UTC m=+0.017105046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:48:40 np0005591760 nice_boyd[248733]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:48:40 np0005591760 nice_boyd[248733]: --> All data devices are unavailable
Jan 22 04:48:40 np0005591760 systemd[1]: libpod-d6f7a262887d87dc36d0a0ecb05d7396fd7562287ce2ffd191745c6e7cffe909.scope: Deactivated successfully.
Jan 22 04:48:40 np0005591760 conmon[248733]: conmon d6f7a262887d87dc36d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d6f7a262887d87dc36d0a0ecb05d7396fd7562287ce2ffd191745c6e7cffe909.scope/container/memory.events
Jan 22 04:48:40 np0005591760 podman[248720]: 2026-01-22 09:48:40.494674236 +0000 UTC m=+0.362385340 container died d6f7a262887d87dc36d0a0ecb05d7396fd7562287ce2ffd191745c6e7cffe909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 04:48:40 np0005591760 systemd[1]: var-lib-containers-storage-overlay-91aa3058d654ecdca2b96eec2b36de5c3ca609aee3d18e3c43173d1c2927d70e-merged.mount: Deactivated successfully.
Jan 22 04:48:40 np0005591760 podman[248720]: 2026-01-22 09:48:40.515442367 +0000 UTC m=+0.383153470 container remove d6f7a262887d87dc36d0a0ecb05d7396fd7562287ce2ffd191745c6e7cffe909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 22 04:48:40 np0005591760 systemd[1]: libpod-conmon-d6f7a262887d87dc36d0a0ecb05d7396fd7562287ce2ffd191745c6e7cffe909.scope: Deactivated successfully.
Jan 22 04:48:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:40 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:40 np0005591760 podman[248841]: 2026-01-22 09:48:40.923362483 +0000 UTC m=+0.029666215 container create 4ac4b30d13c11305a983974bc3bb5d8946b3a69672941150b2c458619595ceeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gagarin, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 04:48:40 np0005591760 systemd[1]: Started libpod-conmon-4ac4b30d13c11305a983974bc3bb5d8946b3a69672941150b2c458619595ceeb.scope.
Jan 22 04:48:40 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:48:40 np0005591760 podman[248841]: 2026-01-22 09:48:40.983408737 +0000 UTC m=+0.089712470 container init 4ac4b30d13c11305a983974bc3bb5d8946b3a69672941150b2c458619595ceeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 04:48:40 np0005591760 podman[248841]: 2026-01-22 09:48:40.987705817 +0000 UTC m=+0.094009549 container start 4ac4b30d13c11305a983974bc3bb5d8946b3a69672941150b2c458619595ceeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gagarin, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:48:40 np0005591760 podman[248841]: 2026-01-22 09:48:40.988883889 +0000 UTC m=+0.095187621 container attach 4ac4b30d13c11305a983974bc3bb5d8946b3a69672941150b2c458619595ceeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:48:40 np0005591760 heuristic_gagarin[248854]: 167 167
Jan 22 04:48:40 np0005591760 systemd[1]: libpod-4ac4b30d13c11305a983974bc3bb5d8946b3a69672941150b2c458619595ceeb.scope: Deactivated successfully.
Jan 22 04:48:40 np0005591760 conmon[248854]: conmon 4ac4b30d13c11305a983 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ac4b30d13c11305a983974bc3bb5d8946b3a69672941150b2c458619595ceeb.scope/container/memory.events
Jan 22 04:48:40 np0005591760 podman[248841]: 2026-01-22 09:48:40.991232185 +0000 UTC m=+0.097535917 container died 4ac4b30d13c11305a983974bc3bb5d8946b3a69672941150b2c458619595ceeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:48:41 np0005591760 systemd[1]: var-lib-containers-storage-overlay-58978de0ee8ee5dcb0901991d8e306975aba7284905341b7ac9e36fafba0d507-merged.mount: Deactivated successfully.
Jan 22 04:48:41 np0005591760 podman[248841]: 2026-01-22 09:48:40.910796675 +0000 UTC m=+0.017100417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:48:41 np0005591760 podman[248841]: 2026-01-22 09:48:41.008134058 +0000 UTC m=+0.114437789 container remove 4ac4b30d13c11305a983974bc3bb5d8946b3a69672941150b2c458619595ceeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gagarin, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:48:41 np0005591760 systemd[1]: libpod-conmon-4ac4b30d13c11305a983974bc3bb5d8946b3a69672941150b2c458619595ceeb.scope: Deactivated successfully.
Jan 22 04:48:41 np0005591760 podman[248875]: 2026-01-22 09:48:41.124393459 +0000 UTC m=+0.026423452 container create 6c443dbd7f9301bea0ebbe645ca297b73886b6565f4548df0e1cfffe067ff91e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 22 04:48:41 np0005591760 systemd[1]: Started libpod-conmon-6c443dbd7f9301bea0ebbe645ca297b73886b6565f4548df0e1cfffe067ff91e.scope.
Jan 22 04:48:41 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:48:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1481ffbaf2e3cc688c9e5780addd8e07937a76d8157bbc22e8076bea40c538c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1481ffbaf2e3cc688c9e5780addd8e07937a76d8157bbc22e8076bea40c538c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1481ffbaf2e3cc688c9e5780addd8e07937a76d8157bbc22e8076bea40c538c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1481ffbaf2e3cc688c9e5780addd8e07937a76d8157bbc22e8076bea40c538c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:41 np0005591760 podman[248875]: 2026-01-22 09:48:41.175956272 +0000 UTC m=+0.077986265 container init 6c443dbd7f9301bea0ebbe645ca297b73886b6565f4548df0e1cfffe067ff91e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 04:48:41 np0005591760 podman[248875]: 2026-01-22 09:48:41.183622723 +0000 UTC m=+0.085652716 container start 6c443dbd7f9301bea0ebbe645ca297b73886b6565f4548df0e1cfffe067ff91e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_chatterjee, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 04:48:41 np0005591760 podman[248875]: 2026-01-22 09:48:41.184702409 +0000 UTC m=+0.086732402 container attach 6c443dbd7f9301bea0ebbe645ca297b73886b6565f4548df0e1cfffe067ff91e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_chatterjee, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 04:48:41 np0005591760 podman[248875]: 2026-01-22 09:48:41.113268407 +0000 UTC m=+0.015298420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]: {
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:    "0": [
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:        {
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            "devices": [
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "/dev/loop3"
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            ],
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            "lv_name": "ceph_lv0",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            "lv_size": "21470642176",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            "name": "ceph_lv0",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            "tags": {
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.cluster_name": "ceph",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.crush_device_class": "",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.encrypted": "0",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.osd_id": "0",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.type": "block",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.vdo": "0",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:                "ceph.with_tpm": "0"
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            },
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            "type": "block",
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:            "vg_name": "ceph_vg0"
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:        }
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]:    ]
Jan 22 04:48:41 np0005591760 compassionate_chatterjee[248888]: }
Jan 22 04:48:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:41.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:41 np0005591760 systemd[1]: libpod-6c443dbd7f9301bea0ebbe645ca297b73886b6565f4548df0e1cfffe067ff91e.scope: Deactivated successfully.
Jan 22 04:48:41 np0005591760 podman[248875]: 2026-01-22 09:48:41.409280875 +0000 UTC m=+0.311310878 container died 6c443dbd7f9301bea0ebbe645ca297b73886b6565f4548df0e1cfffe067ff91e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_chatterjee, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:48:41 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1481ffbaf2e3cc688c9e5780addd8e07937a76d8157bbc22e8076bea40c538c6-merged.mount: Deactivated successfully.
Jan 22 04:48:41 np0005591760 podman[248875]: 2026-01-22 09:48:41.432217945 +0000 UTC m=+0.334247937 container remove 6c443dbd7f9301bea0ebbe645ca297b73886b6565f4548df0e1cfffe067ff91e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_chatterjee, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:48:41 np0005591760 systemd[1]: libpod-conmon-6c443dbd7f9301bea0ebbe645ca297b73886b6565f4548df0e1cfffe067ff91e.scope: Deactivated successfully.
Jan 22 04:48:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:41 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540082b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 427 B/s rd, 0 op/s
Jan 22 04:48:41 np0005591760 podman[248988]: 2026-01-22 09:48:41.833931781 +0000 UTC m=+0.027168727 container create 6cdd9f8464c3c414e605f981e70724d4af2f61b2b1ec4e4e3eb2689391af85be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:48:41 np0005591760 systemd[1]: Started libpod-conmon-6cdd9f8464c3c414e605f981e70724d4af2f61b2b1ec4e4e3eb2689391af85be.scope.
Jan 22 04:48:41 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:48:41 np0005591760 podman[248988]: 2026-01-22 09:48:41.876515165 +0000 UTC m=+0.069752111 container init 6cdd9f8464c3c414e605f981e70724d4af2f61b2b1ec4e4e3eb2689391af85be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_burnell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:48:41 np0005591760 podman[248988]: 2026-01-22 09:48:41.880817737 +0000 UTC m=+0.074054673 container start 6cdd9f8464c3c414e605f981e70724d4af2f61b2b1ec4e4e3eb2689391af85be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_burnell, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 22 04:48:41 np0005591760 podman[248988]: 2026-01-22 09:48:41.881744715 +0000 UTC m=+0.074981650 container attach 6cdd9f8464c3c414e605f981e70724d4af2f61b2b1ec4e4e3eb2689391af85be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_burnell, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 04:48:41 np0005591760 competent_burnell[249001]: 167 167
Jan 22 04:48:41 np0005591760 systemd[1]: libpod-6cdd9f8464c3c414e605f981e70724d4af2f61b2b1ec4e4e3eb2689391af85be.scope: Deactivated successfully.
Jan 22 04:48:41 np0005591760 podman[248988]: 2026-01-22 09:48:41.883617904 +0000 UTC m=+0.076854841 container died 6cdd9f8464c3c414e605f981e70724d4af2f61b2b1ec4e4e3eb2689391af85be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 04:48:41 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c5193a140ab2f0a52ac3713094abf771679679c47a0c9b672565577554b7db61-merged.mount: Deactivated successfully.
Jan 22 04:48:41 np0005591760 podman[248988]: 2026-01-22 09:48:41.898310422 +0000 UTC m=+0.091547358 container remove 6cdd9f8464c3c414e605f981e70724d4af2f61b2b1ec4e4e3eb2689391af85be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_burnell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:48:41 np0005591760 podman[248988]: 2026-01-22 09:48:41.822858376 +0000 UTC m=+0.016095312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:48:41 np0005591760 systemd[1]: libpod-conmon-6cdd9f8464c3c414e605f981e70724d4af2f61b2b1ec4e4e3eb2689391af85be.scope: Deactivated successfully.
Jan 22 04:48:42 np0005591760 podman[249023]: 2026-01-22 09:48:42.014649094 +0000 UTC m=+0.026469037 container create d41d5321a3a9b2c72cc796e763e7e21d05d0e332047f1f042bde830c1c9fbc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:48:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:42.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:42 np0005591760 systemd[1]: Started libpod-conmon-d41d5321a3a9b2c72cc796e763e7e21d05d0e332047f1f042bde830c1c9fbc3c.scope.
Jan 22 04:48:42 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:48:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aba2564fb7d15d35e158ed68c71ced0b12b533d3cdbe42e035f7b865e565c4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aba2564fb7d15d35e158ed68c71ced0b12b533d3cdbe42e035f7b865e565c4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aba2564fb7d15d35e158ed68c71ced0b12b533d3cdbe42e035f7b865e565c4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aba2564fb7d15d35e158ed68c71ced0b12b533d3cdbe42e035f7b865e565c4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:48:42 np0005591760 podman[249023]: 2026-01-22 09:48:42.068619745 +0000 UTC m=+0.080439689 container init d41d5321a3a9b2c72cc796e763e7e21d05d0e332047f1f042bde830c1c9fbc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:48:42 np0005591760 podman[249023]: 2026-01-22 09:48:42.07650507 +0000 UTC m=+0.088325014 container start d41d5321a3a9b2c72cc796e763e7e21d05d0e332047f1f042bde830c1c9fbc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cohen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:48:42 np0005591760 podman[249023]: 2026-01-22 09:48:42.07753955 +0000 UTC m=+0.089359494 container attach d41d5321a3a9b2c72cc796e763e7e21d05d0e332047f1f042bde830c1c9fbc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cohen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 22 04:48:42 np0005591760 podman[249023]: 2026-01-22 09:48:42.004165722 +0000 UTC m=+0.015985666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:48:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960003210 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:42 np0005591760 pensive_cohen[249036]: {}
Jan 22 04:48:42 np0005591760 lvm[249112]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:48:42 np0005591760 lvm[249112]: VG ceph_vg0 finished
Jan 22 04:48:42 np0005591760 systemd[1]: libpod-d41d5321a3a9b2c72cc796e763e7e21d05d0e332047f1f042bde830c1c9fbc3c.scope: Deactivated successfully.
Jan 22 04:48:42 np0005591760 podman[249023]: 2026-01-22 09:48:42.54686355 +0000 UTC m=+0.558683494 container died d41d5321a3a9b2c72cc796e763e7e21d05d0e332047f1f042bde830c1c9fbc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cohen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 22 04:48:42 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0aba2564fb7d15d35e158ed68c71ced0b12b533d3cdbe42e035f7b865e565c4c-merged.mount: Deactivated successfully.
Jan 22 04:48:42 np0005591760 lvm[249116]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:48:42 np0005591760 lvm[249116]: VG ceph_vg0 finished
Jan 22 04:48:42 np0005591760 podman[249023]: 2026-01-22 09:48:42.569515581 +0000 UTC m=+0.581335525 container remove d41d5321a3a9b2c72cc796e763e7e21d05d0e332047f1f042bde830c1c9fbc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:48:42 np0005591760 systemd[1]: libpod-conmon-d41d5321a3a9b2c72cc796e763e7e21d05d0e332047f1f042bde830c1c9fbc3c.scope: Deactivated successfully.
Jan 22 04:48:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:48:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:48:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:48:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:48:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960003210 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:48:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:48:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:43.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:48:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:43 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 256 B/s rd, 0 op/s
Jan 22 04:48:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:48:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:48:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:48:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:44.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:48:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:44 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954007990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:44 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960003210 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:45.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:45 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 342 B/s rd, 0 op/s
Jan 22 04:48:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:46.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:46 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f693c005990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:46 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954007990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:47.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:47.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:47.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:47.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:48:47.303 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:48:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:48:47.303 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:48:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:48:47.304 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:48:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:47.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:47 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 256 B/s rd, 0 op/s
Jan 22 04:48:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:48:47] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:48:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:48:47] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:48:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:48.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:48:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:48.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:48.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:48.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:48.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:48:49
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.log', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.control', 'vms']
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:48:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:49.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:49 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540082b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:48:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 256 B/s rd, 0 op/s
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.649806) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075329649831, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4100, "num_deletes": 502, "total_data_size": 8247796, "memory_usage": 8353280, "flush_reason": "Manual Compaction"}
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075329666019, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 7991214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13311, "largest_seqno": 17409, "table_properties": {"data_size": 7973990, "index_size": 11542, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4613, "raw_key_size": 36130, "raw_average_key_size": 19, "raw_value_size": 7938239, "raw_average_value_size": 4335, "num_data_blocks": 504, "num_entries": 1831, "num_filter_entries": 1831, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074908, "oldest_key_time": 1769074908, "file_creation_time": 1769075329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 16247 microseconds, and 9707 cpu microseconds.
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.666054) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 7991214 bytes OK
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.666079) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.666815) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.666827) EVENT_LOG_v1 {"time_micros": 1769075329666824, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.666835) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8231302, prev total WAL file size 8231302, number of live WAL files 2.
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.668187) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(7803KB)], [32(11MB)]
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075329668206, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 20163644, "oldest_snapshot_seqno": -1}
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5135 keys, 15236213 bytes, temperature: kUnknown
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075329703267, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15236213, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15197681, "index_size": 24598, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 128478, "raw_average_key_size": 25, "raw_value_size": 15100451, "raw_average_value_size": 2940, "num_data_blocks": 1034, "num_entries": 5135, "num_filter_entries": 5135, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769075329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.703546) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15236213 bytes
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.703906) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 572.2 rd, 432.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.6, 11.6 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(4.4) write-amplify(1.9) OK, records in: 6158, records dropped: 1023 output_compression: NoCompression
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.703919) EVENT_LOG_v1 {"time_micros": 1769075329703914, "job": 14, "event": "compaction_finished", "compaction_time_micros": 35238, "compaction_time_cpu_micros": 21671, "output_level": 6, "num_output_files": 1, "total_output_size": 15236213, "num_input_records": 6158, "num_output_records": 5135, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075329705108, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075329706582, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.668147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.706624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.706628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.706630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.706631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:48:49 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:48:49.706632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:48:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:50.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964002630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:51.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:51 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:48:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:48:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:52.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:48:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:52 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:52 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:48:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:53.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:53 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69640031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:48:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:54.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:54 np0005591760 podman[249188]: 2026-01-22 09:48:54.088350791 +0000 UTC m=+0.073800222 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 04:48:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:54 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954007990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:54 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954007990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:55.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:55 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:48:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:56.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:56 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69640031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:56 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69640031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:57.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:57.158Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:57.158Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:57.159Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:48:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:57.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:48:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:57 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:48:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:48:57] "GET /metrics HTTP/1.1" 200 48487 "" "Prometheus/2.51.0"
Jan 22 04:48:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:48:57] "GET /metrics HTTP/1.1" 200 48487 "" "Prometheus/2.51.0"
Jan 22 04:48:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:48:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:48:58.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:48:58 np0005591760 podman[249208]: 2026-01-22 09:48:58.076231597 +0000 UTC m=+0.059269142 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:48:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:48:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:58.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:58.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:58.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:48:58.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:48:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:48:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:48:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:48:59.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:48:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:48:59 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af20123e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:48:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:49:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:49:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:00.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:49:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:00 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:00 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540082b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:49:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:01.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:49:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:01 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540082b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 22 04:49:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:02.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:02 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540082b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:02 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f696c0a7100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:49:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:03.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:03 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69640044a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:49:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:49:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:04.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:49:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540082b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:05.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:05 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f696c0a7100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:49:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:06.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:06 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69640044a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.322 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.322 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.322 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.322 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.323 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.323 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.323 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.323 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.323 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.350 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.350 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.350 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.350 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.351 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:49:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:49:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145902328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.691 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:49:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:06 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.888 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.889 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4950MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": 
"0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.889 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.889 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.956 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.957 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 04:49:06 np0005591760 nova_compute[248045]: 2026-01-22 09:49:06.987 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:49:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:07.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:07.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:07.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:07.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:49:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804162205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:49:07 np0005591760 nova_compute[248045]: 2026-01-22 09:49:07.330 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:49:07 np0005591760 nova_compute[248045]: 2026-01-22 09:49:07.334 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:49:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:49:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:07.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:49:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:07 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:49:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:49:07] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:49:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:49:07] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:49:07 np0005591760 nova_compute[248045]: 2026-01-22 09:49:07.606 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:49:07 np0005591760 nova_compute[248045]: 2026-01-22 09:49:07.607 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 04:49:07 np0005591760 nova_compute[248045]: 2026-01-22 09:49:07.608 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:49:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:08.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:08 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094908 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:49:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:49:08 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.24817 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 22 04:49:08 np0005591760 ceph-mgr[74522]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 22 04:49:08 np0005591760 ceph-mgr[74522]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 22 04:49:08 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.24814 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 22 04:49:08 np0005591760 ceph-mgr[74522]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 22 04:49:08 np0005591760 ceph-mgr[74522]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 22 04:49:08 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.24817 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 22 04:49:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:08 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:08.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:08.862Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:08.862Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:08.863Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:49:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:09.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:49:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:09 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:49:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:10.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:11.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:11 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:49:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:12.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69640051b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f696c0a7100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:49:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:13.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:13 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954008430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:49:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:14.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69600050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964005330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:49:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:15.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:49:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:15 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964005330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:49:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:49:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:49:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:16.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:49:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964005330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954008430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:17.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:17.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:17.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:17.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:17.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:17 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f696c0a7100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:49:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:49:17] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:49:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:49:17] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:49:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:49:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:18.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:49:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:49:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964005330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:18.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:18.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:18.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:18.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:19 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:49:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:19 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:49:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:49:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:49:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:49:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:49:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:49:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:49:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:19.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:19 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954008430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:49:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:20.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f696c0a7100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:21.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:21 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964005330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:49:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:49:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:49:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:22.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:49:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954008430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f696c0a7100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:49:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:23.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:23 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 22 04:49:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:24.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964005330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954008430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:25 np0005591760 podman[249328]: 2026-01-22 09:49:25.045339337 +0000 UTC m=+0.036079294 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible)
Jan 22 04:49:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:25.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:25 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f696c0a7100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:49:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:26.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964005330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:27.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:27.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:27.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:27.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:27.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:27 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954008430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 22 04:49:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:49:27] "GET /metrics HTTP/1.1" 200 48487 "" "Prometheus/2.51.0"
Jan 22 04:49:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:49:27] "GET /metrics HTTP/1.1" 200 48487 "" "Prometheus/2.51.0"
Jan 22 04:49:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:28.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:28 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f696c0a7100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/094928 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:49:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:49:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:28 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:28.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:28.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:28.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:28.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:29 np0005591760 podman[249348]: 2026-01-22 09:49:29.057584972 +0000 UTC m=+0.053498482 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 04:49:29 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.24835 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 22 04:49:29 np0005591760 ceph-mgr[74522]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 22 04:49:29 np0005591760 ceph-mgr[74522]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 22 04:49:29 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.24830 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 22 04:49:29 np0005591760 ceph-mgr[74522]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 22 04:49:29 np0005591760 ceph-mgr[74522]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 22 04:49:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:29.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:29 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.24835 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 22 04:49:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:29 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69640054d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 22 04:49:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:30.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540085d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f696c0a7100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:31.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:31 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f696c0a7100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 22 04:49:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:32.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:32 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f696c0a7100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:32 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540085d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:49:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:33.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:33 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:49:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:34.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:34 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6970011970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:34 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697001d9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:35.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:35 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f696c0a7100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 22 04:49:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:36.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:36 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6978013850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:36 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:37.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:37.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:37.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:37.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:37.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:37 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697001d9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:49:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:49:37] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 22 04:49:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:49:37] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 22 04:49:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:38.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:38 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:49:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:38 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:38.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:38.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:38.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:38.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:39.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:39 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:49:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:40.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:40 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697001d9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:40 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697801e800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:41.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:41 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:49:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:42.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697001d9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:49:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 260 B/s rd, 0 op/s
Jan 22 04:49:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:49:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:49:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:49:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:49:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:43.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:43 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697801e800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:43 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:49:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:49:43 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:49:43 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:49:43 np0005591760 podman[249574]: 2026-01-22 09:49:43.756806524 +0000 UTC m=+0.029679628 container create d327e8318b0ba20db1e3272e6a45ad87c1431fb068a60ae3218b66c81b2e00c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:49:43 np0005591760 systemd[1]: Started libpod-conmon-d327e8318b0ba20db1e3272e6a45ad87c1431fb068a60ae3218b66c81b2e00c0.scope.
Jan 22 04:49:43 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:49:43 np0005591760 podman[249574]: 2026-01-22 09:49:43.815527596 +0000 UTC m=+0.088400720 container init d327e8318b0ba20db1e3272e6a45ad87c1431fb068a60ae3218b66c81b2e00c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:49:43 np0005591760 podman[249574]: 2026-01-22 09:49:43.819751502 +0000 UTC m=+0.092624606 container start d327e8318b0ba20db1e3272e6a45ad87c1431fb068a60ae3218b66c81b2e00c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:49:43 np0005591760 podman[249574]: 2026-01-22 09:49:43.820834163 +0000 UTC m=+0.093707267 container attach d327e8318b0ba20db1e3272e6a45ad87c1431fb068a60ae3218b66c81b2e00c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mendeleev, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 04:49:43 np0005591760 epic_mendeleev[249587]: 167 167
Jan 22 04:49:43 np0005591760 systemd[1]: libpod-d327e8318b0ba20db1e3272e6a45ad87c1431fb068a60ae3218b66c81b2e00c0.scope: Deactivated successfully.
Jan 22 04:49:43 np0005591760 podman[249574]: 2026-01-22 09:49:43.824076638 +0000 UTC m=+0.096949741 container died d327e8318b0ba20db1e3272e6a45ad87c1431fb068a60ae3218b66c81b2e00c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mendeleev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 04:49:43 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a99410de1fd06b08ee6d46e5fe99332518afac5107abcd1cdfc71f52c71c9afd-merged.mount: Deactivated successfully.
Jan 22 04:49:43 np0005591760 podman[249574]: 2026-01-22 09:49:43.745458996 +0000 UTC m=+0.018332120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:49:43 np0005591760 podman[249574]: 2026-01-22 09:49:43.849119782 +0000 UTC m=+0.121992885 container remove d327e8318b0ba20db1e3272e6a45ad87c1431fb068a60ae3218b66c81b2e00c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mendeleev, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:49:43 np0005591760 systemd[1]: libpod-conmon-d327e8318b0ba20db1e3272e6a45ad87c1431fb068a60ae3218b66c81b2e00c0.scope: Deactivated successfully.
Jan 22 04:49:43 np0005591760 podman[249610]: 2026-01-22 09:49:43.97009577 +0000 UTC m=+0.029366449 container create 7f81b4ae15df6a8be4e9ba21643f3e424623b356d7c08261c1b2290df0450f10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:49:43 np0005591760 systemd[1]: Started libpod-conmon-7f81b4ae15df6a8be4e9ba21643f3e424623b356d7c08261c1b2290df0450f10.scope.
Jan 22 04:49:44 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:49:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531f74652826eb7ddd0095e74d521fa671641cae9244dd74e5f6fefad39b8b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531f74652826eb7ddd0095e74d521fa671641cae9244dd74e5f6fefad39b8b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531f74652826eb7ddd0095e74d521fa671641cae9244dd74e5f6fefad39b8b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531f74652826eb7ddd0095e74d521fa671641cae9244dd74e5f6fefad39b8b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531f74652826eb7ddd0095e74d521fa671641cae9244dd74e5f6fefad39b8b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:44 np0005591760 podman[249610]: 2026-01-22 09:49:44.038492768 +0000 UTC m=+0.097763467 container init 7f81b4ae15df6a8be4e9ba21643f3e424623b356d7c08261c1b2290df0450f10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_driscoll, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 22 04:49:44 np0005591760 podman[249610]: 2026-01-22 09:49:44.043195135 +0000 UTC m=+0.102465814 container start 7f81b4ae15df6a8be4e9ba21643f3e424623b356d7c08261c1b2290df0450f10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_driscoll, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 22 04:49:44 np0005591760 podman[249610]: 2026-01-22 09:49:44.044806453 +0000 UTC m=+0.104077132 container attach 7f81b4ae15df6a8be4e9ba21643f3e424623b356d7c08261c1b2290df0450f10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 04:49:44 np0005591760 podman[249610]: 2026-01-22 09:49:43.9588167 +0000 UTC m=+0.018087399 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:49:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:44.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:44 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:44 np0005591760 serene_driscoll[249623]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:49:44 np0005591760 serene_driscoll[249623]: --> All data devices are unavailable
Jan 22 04:49:44 np0005591760 systemd[1]: libpod-7f81b4ae15df6a8be4e9ba21643f3e424623b356d7c08261c1b2290df0450f10.scope: Deactivated successfully.
Jan 22 04:49:44 np0005591760 podman[249638]: 2026-01-22 09:49:44.334764736 +0000 UTC m=+0.017781091 container died 7f81b4ae15df6a8be4e9ba21643f3e424623b356d7c08261c1b2290df0450f10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:49:44 np0005591760 systemd[1]: var-lib-containers-storage-overlay-9531f74652826eb7ddd0095e74d521fa671641cae9244dd74e5f6fefad39b8b2-merged.mount: Deactivated successfully.
Jan 22 04:49:44 np0005591760 podman[249638]: 2026-01-22 09:49:44.355691688 +0000 UTC m=+0.038708024 container remove 7f81b4ae15df6a8be4e9ba21643f3e424623b356d7c08261c1b2290df0450f10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_driscoll, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:49:44 np0005591760 systemd[1]: libpod-conmon-7f81b4ae15df6a8be4e9ba21643f3e424623b356d7c08261c1b2290df0450f10.scope: Deactivated successfully.
Jan 22 04:49:44 np0005591760 podman[249732]: 2026-01-22 09:49:44.780913771 +0000 UTC m=+0.037153913 container create 03005b3f18be88ae589e2a0075cbdd318bfa73e389f2e5462c60d7ece2f149e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 04:49:44 np0005591760 systemd[1]: Started libpod-conmon-03005b3f18be88ae589e2a0075cbdd318bfa73e389f2e5462c60d7ece2f149e9.scope.
Jan 22 04:49:44 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:49:44 np0005591760 podman[249732]: 2026-01-22 09:49:44.837223014 +0000 UTC m=+0.093463146 container init 03005b3f18be88ae589e2a0075cbdd318bfa73e389f2e5462c60d7ece2f149e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 04:49:44 np0005591760 podman[249732]: 2026-01-22 09:49:44.841836324 +0000 UTC m=+0.098076456 container start 03005b3f18be88ae589e2a0075cbdd318bfa73e389f2e5462c60d7ece2f149e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_torvalds, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 04:49:44 np0005591760 podman[249732]: 2026-01-22 09:49:44.843259718 +0000 UTC m=+0.099499850 container attach 03005b3f18be88ae589e2a0075cbdd318bfa73e389f2e5462c60d7ece2f149e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_torvalds, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:49:44 np0005591760 optimistic_torvalds[249745]: 167 167
Jan 22 04:49:44 np0005591760 systemd[1]: libpod-03005b3f18be88ae589e2a0075cbdd318bfa73e389f2e5462c60d7ece2f149e9.scope: Deactivated successfully.
Jan 22 04:49:44 np0005591760 podman[249732]: 2026-01-22 09:49:44.845571588 +0000 UTC m=+0.101811740 container died 03005b3f18be88ae589e2a0075cbdd318bfa73e389f2e5462c60d7ece2f149e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_torvalds, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 22 04:49:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:44 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:44 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ee2739fb8dd105166d03b48fef38988feb04b3cb9c0a2a3e1bdaf69bfd2b48d2-merged.mount: Deactivated successfully.
Jan 22 04:49:44 np0005591760 podman[249732]: 2026-01-22 09:49:44.768119995 +0000 UTC m=+0.024360137 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:49:44 np0005591760 podman[249732]: 2026-01-22 09:49:44.866197783 +0000 UTC m=+0.122437915 container remove 03005b3f18be88ae589e2a0075cbdd318bfa73e389f2e5462c60d7ece2f149e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:49:44 np0005591760 systemd[1]: libpod-conmon-03005b3f18be88ae589e2a0075cbdd318bfa73e389f2e5462c60d7ece2f149e9.scope: Deactivated successfully.
Jan 22 04:49:44 np0005591760 podman[249766]: 2026-01-22 09:49:44.986555344 +0000 UTC m=+0.030426377 container create ccde746171d5b86d6899ea8a8b540a13663f01c071b56e6209e39302bcb4dbe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 04:49:45 np0005591760 systemd[1]: Started libpod-conmon-ccde746171d5b86d6899ea8a8b540a13663f01c071b56e6209e39302bcb4dbe8.scope.
Jan 22 04:49:45 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:49:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9563fae17bc3f8f677216ebfc328963967b7d9a23f36d25d0ee43ceda45edb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9563fae17bc3f8f677216ebfc328963967b7d9a23f36d25d0ee43ceda45edb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9563fae17bc3f8f677216ebfc328963967b7d9a23f36d25d0ee43ceda45edb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9563fae17bc3f8f677216ebfc328963967b7d9a23f36d25d0ee43ceda45edb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:45 np0005591760 podman[249766]: 2026-01-22 09:49:45.048116832 +0000 UTC m=+0.091987874 container init ccde746171d5b86d6899ea8a8b540a13663f01c071b56e6209e39302bcb4dbe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haibt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:49:45 np0005591760 podman[249766]: 2026-01-22 09:49:45.052799553 +0000 UTC m=+0.096670585 container start ccde746171d5b86d6899ea8a8b540a13663f01c071b56e6209e39302bcb4dbe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haibt, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 22 04:49:45 np0005591760 podman[249766]: 2026-01-22 09:49:45.055141509 +0000 UTC m=+0.099012542 container attach ccde746171d5b86d6899ea8a8b540a13663f01c071b56e6209e39302bcb4dbe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:49:45 np0005591760 podman[249766]: 2026-01-22 09:49:44.975047534 +0000 UTC m=+0.018918576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]: {
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:    "0": [
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:        {
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            "devices": [
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "/dev/loop3"
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            ],
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            "lv_name": "ceph_lv0",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            "lv_size": "21470642176",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            "name": "ceph_lv0",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            "tags": {
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.cluster_name": "ceph",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.crush_device_class": "",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.encrypted": "0",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.osd_id": "0",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.type": "block",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.vdo": "0",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:                "ceph.with_tpm": "0"
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            },
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            "type": "block",
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:            "vg_name": "ceph_vg0"
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:        }
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]:    ]
Jan 22 04:49:45 np0005591760 vibrant_haibt[249779]: }
Jan 22 04:49:45 np0005591760 systemd[1]: libpod-ccde746171d5b86d6899ea8a8b540a13663f01c071b56e6209e39302bcb4dbe8.scope: Deactivated successfully.
Jan 22 04:49:45 np0005591760 podman[249766]: 2026-01-22 09:49:45.29538912 +0000 UTC m=+0.339260162 container died ccde746171d5b86d6899ea8a8b540a13663f01c071b56e6209e39302bcb4dbe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haibt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 04:49:45 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c9563fae17bc3f8f677216ebfc328963967b7d9a23f36d25d0ee43ceda45edb2-merged.mount: Deactivated successfully.
Jan 22 04:49:45 np0005591760 podman[249766]: 2026-01-22 09:49:45.31627175 +0000 UTC m=+0.360142782 container remove ccde746171d5b86d6899ea8a8b540a13663f01c071b56e6209e39302bcb4dbe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haibt, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:49:45 np0005591760 systemd[1]: libpod-conmon-ccde746171d5b86d6899ea8a8b540a13663f01c071b56e6209e39302bcb4dbe8.scope: Deactivated successfully.
Jan 22 04:49:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Jan 22 04:49:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:49:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:45.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:49:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:45 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697001d9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:45 np0005591760 podman[249881]: 2026-01-22 09:49:45.726557987 +0000 UTC m=+0.028845346 container create 9742527895c45b5a6ff6612990dc649fa416378f28ff8f62e625a8d1e282b58a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_roentgen, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:49:45 np0005591760 systemd[1]: Started libpod-conmon-9742527895c45b5a6ff6612990dc649fa416378f28ff8f62e625a8d1e282b58a.scope.
Jan 22 04:49:45 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:49:45 np0005591760 podman[249881]: 2026-01-22 09:49:45.777973974 +0000 UTC m=+0.080261343 container init 9742527895c45b5a6ff6612990dc649fa416378f28ff8f62e625a8d1e282b58a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:49:45 np0005591760 podman[249881]: 2026-01-22 09:49:45.782880035 +0000 UTC m=+0.085167394 container start 9742527895c45b5a6ff6612990dc649fa416378f28ff8f62e625a8d1e282b58a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:49:45 np0005591760 podman[249881]: 2026-01-22 09:49:45.784135914 +0000 UTC m=+0.086423283 container attach 9742527895c45b5a6ff6612990dc649fa416378f28ff8f62e625a8d1e282b58a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:49:45 np0005591760 nervous_roentgen[249894]: 167 167
Jan 22 04:49:45 np0005591760 systemd[1]: libpod-9742527895c45b5a6ff6612990dc649fa416378f28ff8f62e625a8d1e282b58a.scope: Deactivated successfully.
Jan 22 04:49:45 np0005591760 podman[249881]: 2026-01-22 09:49:45.786523666 +0000 UTC m=+0.088811025 container died 9742527895c45b5a6ff6612990dc649fa416378f28ff8f62e625a8d1e282b58a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_roentgen, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 22 04:49:45 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4c8d221fc5ef8f8c81b957004aa40b9c25fe7491a73b021bbb3c29468aa38041-merged.mount: Deactivated successfully.
Jan 22 04:49:45 np0005591760 podman[249881]: 2026-01-22 09:49:45.803674338 +0000 UTC m=+0.105961697 container remove 9742527895c45b5a6ff6612990dc649fa416378f28ff8f62e625a8d1e282b58a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_roentgen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:49:45 np0005591760 podman[249881]: 2026-01-22 09:49:45.715409855 +0000 UTC m=+0.017697234 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:49:45 np0005591760 systemd[1]: libpod-conmon-9742527895c45b5a6ff6612990dc649fa416378f28ff8f62e625a8d1e282b58a.scope: Deactivated successfully.
Jan 22 04:49:45 np0005591760 podman[249916]: 2026-01-22 09:49:45.920529385 +0000 UTC m=+0.027480772 container create e5d2d0e893d628953dc2392a254682d1a15ed9b74b32c8e99b4a8d1706500e06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Jan 22 04:49:45 np0005591760 systemd[1]: Started libpod-conmon-e5d2d0e893d628953dc2392a254682d1a15ed9b74b32c8e99b4a8d1706500e06.scope.
Jan 22 04:49:45 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:49:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298d815cdb319e228bd323d7cb99b773964e26789f883b2bc8c3a3c29bab9ead/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298d815cdb319e228bd323d7cb99b773964e26789f883b2bc8c3a3c29bab9ead/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298d815cdb319e228bd323d7cb99b773964e26789f883b2bc8c3a3c29bab9ead/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298d815cdb319e228bd323d7cb99b773964e26789f883b2bc8c3a3c29bab9ead/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:49:45 np0005591760 podman[249916]: 2026-01-22 09:49:45.980476728 +0000 UTC m=+0.087428136 container init e5d2d0e893d628953dc2392a254682d1a15ed9b74b32c8e99b4a8d1706500e06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_moore, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:49:45 np0005591760 podman[249916]: 2026-01-22 09:49:45.987024916 +0000 UTC m=+0.093976303 container start e5d2d0e893d628953dc2392a254682d1a15ed9b74b32c8e99b4a8d1706500e06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:49:45 np0005591760 podman[249916]: 2026-01-22 09:49:45.988114783 +0000 UTC m=+0.095066169 container attach e5d2d0e893d628953dc2392a254682d1a15ed9b74b32c8e99b4a8d1706500e06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_moore, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:49:46 np0005591760 podman[249916]: 2026-01-22 09:49:45.909827634 +0000 UTC m=+0.016779041 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:49:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:46.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:46 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697801e800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:46 np0005591760 lvm[250006]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:49:46 np0005591760 lvm[250006]: VG ceph_vg0 finished
Jan 22 04:49:46 np0005591760 exciting_moore[249930]: {}
Jan 22 04:49:46 np0005591760 systemd[1]: libpod-e5d2d0e893d628953dc2392a254682d1a15ed9b74b32c8e99b4a8d1706500e06.scope: Deactivated successfully.
Jan 22 04:49:46 np0005591760 podman[249916]: 2026-01-22 09:49:46.504797244 +0000 UTC m=+0.611748631 container died e5d2d0e893d628953dc2392a254682d1a15ed9b74b32c8e99b4a8d1706500e06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:49:46 np0005591760 systemd[1]: var-lib-containers-storage-overlay-298d815cdb319e228bd323d7cb99b773964e26789f883b2bc8c3a3c29bab9ead-merged.mount: Deactivated successfully.
Jan 22 04:49:46 np0005591760 podman[249916]: 2026-01-22 09:49:46.525999846 +0000 UTC m=+0.632951233 container remove e5d2d0e893d628953dc2392a254682d1a15ed9b74b32c8e99b4a8d1706500e06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_moore, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 04:49:46 np0005591760 systemd[1]: libpod-conmon-e5d2d0e893d628953dc2392a254682d1a15ed9b74b32c8e99b4a8d1706500e06.scope: Deactivated successfully.
Jan 22 04:49:46 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:49:46 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:49:46 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:49:46 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:49:46 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:49:46 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:49:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:46 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:47.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:47.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:47.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:47.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:49:47.304 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:49:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:49:47.304 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:49:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:49:47.304 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:49:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 260 B/s rd, 0 op/s
Jan 22 04:49:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:47.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:47 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:49:47] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 22 04:49:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:49:47] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 22 04:49:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:48.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697001d9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:49:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:48.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69780303f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:48.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:48.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:48.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:49:49
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.meta', '.nfs', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.meta', 'images']
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 260 B/s rd, 0 op/s
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:49:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:49.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:49:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:49:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:49 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:50.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 260 B/s rd, 0 op/s
Jan 22 04:49:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:51.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:51 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69780303f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:52.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:52 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:52 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:49:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 260 B/s rd, 0 op/s
Jan 22 04:49:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:49:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:53.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:49:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:53 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:54.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:54 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:54 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 22 04:49:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:55.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:55 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69780303f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:56 np0005591760 podman[250079]: 2026-01-22 09:49:56.042233103 +0000 UTC m=+0.034551456 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 04:49:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:56.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:56 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:56 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:57.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:57.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:57.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:57.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:49:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:57.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:57 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:49:57] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:49:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:49:57] "GET /metrics HTTP/1.1" 200 48486 "" "Prometheus/2.51.0"
Jan 22 04:49:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:49:58.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69780303f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:49:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:58.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69780303f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:49:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:58.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:58.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:49:58.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:49:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:49:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:49:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:49:59.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:49:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:49:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:49:59 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69780303f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:00 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Jan 22 04:50:00 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Jan 22 04:50:00 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.pszzrs on compute-1 is in unknown state
Jan 22 04:50:00 np0005591760 podman[250099]: 2026-01-22 09:50:00.062398986 +0000 UTC m=+0.055842034 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 22 04:50:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:00.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:00 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:00 np0005591760 ceph-mon[74254]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Jan 22 04:50:00 np0005591760 ceph-mon[74254]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Jan 22 04:50:00 np0005591760 ceph-mon[74254]:    daemon nfs.cephfs.0.0.compute-1.pszzrs on compute-1 is in unknown state
Jan 22 04:50:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:00 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:01.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:01 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:02.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:02 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69780303f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:02 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:50:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:03.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:03 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:04.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69780303f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 22 04:50:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:05.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:05 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:06.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:06 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:06 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:07.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:07.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:07.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:07.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:07.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:07 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974002e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:50:07] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 22 04:50:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:50:07] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.604 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.604 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.619 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.619 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.620 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.626 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.626 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.627 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.627 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.627 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.627 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.627 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.627 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.627 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.643 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.644 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.644 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.644 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 04:50:07 np0005591760 nova_compute[248045]: 2026-01-22 09:50:07.644 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:50:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:50:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/273401906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.018 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:50:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:08.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.245 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.246 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4935MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.247 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.247 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:50:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:08 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974002e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.292 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.292 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.306 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.348225) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075408348266, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 930, "num_deletes": 250, "total_data_size": 1477980, "memory_usage": 1496312, "flush_reason": "Manual Compaction"}
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075408352318, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 920394, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17410, "largest_seqno": 18339, "table_properties": {"data_size": 916624, "index_size": 1422, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9891, "raw_average_key_size": 20, "raw_value_size": 908526, "raw_average_value_size": 1865, "num_data_blocks": 62, "num_entries": 487, "num_filter_entries": 487, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769075330, "oldest_key_time": 1769075330, "file_creation_time": 1769075408, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 4144 microseconds, and 3104 cpu microseconds.
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.352368) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 920394 bytes OK
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.352388) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.352756) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.352768) EVENT_LOG_v1 {"time_micros": 1769075408352765, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.352799) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1473596, prev total WAL file size 1473596, number of live WAL files 2.
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.353467) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(898KB)], [35(14MB)]
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075408353506, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 16156607, "oldest_snapshot_seqno": -1}
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 5141 keys, 12606827 bytes, temperature: kUnknown
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075408384773, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12606827, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12571873, "index_size": 21020, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 129018, "raw_average_key_size": 25, "raw_value_size": 12478121, "raw_average_value_size": 2427, "num_data_blocks": 876, "num_entries": 5141, "num_filter_entries": 5141, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769075408, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.384913) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12606827 bytes
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.385262) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 515.8 rd, 402.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.5 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(31.3) write-amplify(13.7) OK, records in: 5622, records dropped: 481 output_compression: NoCompression
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.385275) EVENT_LOG_v1 {"time_micros": 1769075408385269, "job": 16, "event": "compaction_finished", "compaction_time_micros": 31322, "compaction_time_cpu_micros": 19249, "output_level": 6, "num_output_files": 1, "total_output_size": 12606827, "num_input_records": 5622, "num_output_records": 5141, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075408385449, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075408387126, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.353375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.387254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.387260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.387262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.387266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:50:08.387268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:50:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/996183729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.690 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.694 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.705 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.707 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 04:50:08 np0005591760 nova_compute[248045]: 2026-01-22 09:50:08.707 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:50:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:08.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:08.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:08.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:08.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:08 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:09.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:09 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:10.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c004fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:11.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:11 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:12.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c005ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:50:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:13.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:13 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:50:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:14.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:50:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 22 04:50:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:15.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:15 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:16.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:17.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:17.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:17.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:17.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:17 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:50:17.223 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:52:1d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:ec:a7:e9:bb:bd'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 04:50:17 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:50:17.224 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 04:50:17 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:50:17.225 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 04:50:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:17.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:17 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6960005e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:50:17] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 22 04:50:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:50:17] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 22 04:50:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:18.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:50:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:18.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:18.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:18.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:18.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:50:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:50:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:50:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:50:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:50:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:50:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:19.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:19 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:50:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:20.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:50:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:21.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:21 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6984002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:22.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6988004fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:50:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.002000020s ======
Jan 22 04:50:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:23.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Jan 22 04:50:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:23 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:24.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 22 04:50:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:25.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:25 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:26.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6984003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6984003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:27.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:27.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:27.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:27.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:27 np0005591760 podman[250222]: 2026-01-22 09:50:27.096602537 +0000 UTC m=+0.078343495 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 04:50:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:27.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:27 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c0063e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:50:27] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 22 04:50:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:50:27] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 22 04:50:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:28.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:28 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:50:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:28.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:28.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:28.861Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:28.861Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:28 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:29.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:29 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6984003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:30.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c0063e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:31 np0005591760 podman[250242]: 2026-01-22 09:50:31.090801656 +0000 UTC m=+0.073222420 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 04:50:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:31.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:31 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:32.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:32 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:32 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:50:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:33.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:33 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:34.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:34 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:34 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 22 04:50:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:50:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:35.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:50:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:35 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:36.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:36 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:36 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:37.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:37.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:37.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:37.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:37.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:37 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:50:37] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 22 04:50:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:50:37] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 22 04:50:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:38.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:38 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:50:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:38.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:38.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:38.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:38.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:38 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:39.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:39 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:50:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:40.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:50:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:40 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:40 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:41.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:41 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:42.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:50:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:43.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:43 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:44.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:44 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:44 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 22 04:50:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:45.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:45 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:46.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:46 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:46 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:47.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:47.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:47.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:47.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:50:47.305 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:50:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:50:47.307 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:50:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:50:47.307 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:50:47 np0005591760 podman[250415]: 2026-01-22 09:50:47.323159989 +0000 UTC m=+0.046825150 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:50:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:47 np0005591760 podman[250415]: 2026-01-22 09:50:47.402925635 +0000 UTC m=+0.126590795 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 04:50:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:47.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:47 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:50:47] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 22 04:50:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:50:47] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 22 04:50:47 np0005591760 podman[250512]: 2026-01-22 09:50:47.759826193 +0000 UTC m=+0.045160703 container exec e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:50:47 np0005591760 podman[250512]: 2026-01-22 09:50:47.771089783 +0000 UTC m=+0.056424273 container exec_died e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:50:48 np0005591760 podman[250597]: 2026-01-22 09:50:48.06460946 +0000 UTC m=+0.043789266 container exec 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:50:48 np0005591760 podman[250597]: 2026-01-22 09:50:48.084660442 +0000 UTC m=+0.063840247 container exec_died 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:50:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:48.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:48 np0005591760 podman[250656]: 2026-01-22 09:50:48.247997751 +0000 UTC m=+0.037644127 container exec 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:50:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:50:48 np0005591760 podman[250656]: 2026-01-22 09:50:48.411243537 +0000 UTC m=+0.200889914 container exec_died 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 04:50:48 np0005591760 podman[250716]: 2026-01-22 09:50:48.576418873 +0000 UTC m=+0.036515579 container exec fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:50:48 np0005591760 podman[250716]: 2026-01-22 09:50:48.59398072 +0000 UTC m=+0.054077417 container exec_died fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 04:50:48 np0005591760 podman[250772]: 2026-01-22 09:50:48.755858668 +0000 UTC m=+0.036231434 container exec 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1793, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.expose-services=, name=keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc.)
Jan 22 04:50:48 np0005591760 podman[250772]: 2026-01-22 09:50:48.781947785 +0000 UTC m=+0.062320551 container exec_died 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, io.openshift.tags=Ceph keepalived, release=1793, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vcs-type=git, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 04:50:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:48.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:48.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:48.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:48.861Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:48 np0005591760 podman[250826]: 2026-01-22 09:50:48.95110581 +0000 UTC m=+0.036797801 container exec a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:50:48 np0005591760 podman[250826]: 2026-01-22 09:50:48.984303973 +0000 UTC m=+0.069995954 container exec_died a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 04:50:49 np0005591760 podman[250873]: 2026-01-22 09:50:49.110912372 +0000 UTC m=+0.036959075 container exec bc9745f833ab15a95783334c6971b2bac3c4abfd0e382b634fe7b91601565ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 04:50:49 np0005591760 podman[250891]: 2026-01-22 09:50:49.175891395 +0000 UTC m=+0.047743792 container exec_died bc9745f833ab15a95783334c6971b2bac3c4abfd0e382b634fe7b91601565ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 04:50:49 np0005591760 podman[250873]: 2026-01-22 09:50:49.179582586 +0000 UTC m=+0.105629298 container exec_died bc9745f833ab15a95783334c6971b2bac3c4abfd0e382b634fe7b91601565ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:50:49
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.mgr', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', '.nfs', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.rgw.root']
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:50:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:50:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:50:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:50:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:50:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:49.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:49 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:49 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:50:49 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:50:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 292 B/s rd, 0 op/s
Jan 22 04:50:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:50:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:50:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:50:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:50:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:50:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:50.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:50:50 np0005591760 podman[251091]: 2026-01-22 09:50:50.302233809 +0000 UTC m=+0.029141494 container create 36f4aea33fac2e514d4b0e7c1d6cb0659aa4edb177b73395b1f1e2b31c666a50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kapitsa, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 04:50:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6974005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:50 np0005591760 systemd[1]: Started libpod-conmon-36f4aea33fac2e514d4b0e7c1d6cb0659aa4edb177b73395b1f1e2b31c666a50.scope.
Jan 22 04:50:50 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:50:50 np0005591760 podman[251091]: 2026-01-22 09:50:50.358188514 +0000 UTC m=+0.085096209 container init 36f4aea33fac2e514d4b0e7c1d6cb0659aa4edb177b73395b1f1e2b31c666a50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:50:50 np0005591760 podman[251091]: 2026-01-22 09:50:50.364153362 +0000 UTC m=+0.091061037 container start 36f4aea33fac2e514d4b0e7c1d6cb0659aa4edb177b73395b1f1e2b31c666a50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 04:50:50 np0005591760 festive_kapitsa[251104]: 167 167
Jan 22 04:50:50 np0005591760 systemd[1]: libpod-36f4aea33fac2e514d4b0e7c1d6cb0659aa4edb177b73395b1f1e2b31c666a50.scope: Deactivated successfully.
Jan 22 04:50:50 np0005591760 conmon[251104]: conmon 36f4aea33fac2e514d4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36f4aea33fac2e514d4b0e7c1d6cb0659aa4edb177b73395b1f1e2b31c666a50.scope/container/memory.events
Jan 22 04:50:50 np0005591760 podman[251091]: 2026-01-22 09:50:50.370854479 +0000 UTC m=+0.097762154 container attach 36f4aea33fac2e514d4b0e7c1d6cb0659aa4edb177b73395b1f1e2b31c666a50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Jan 22 04:50:50 np0005591760 podman[251091]: 2026-01-22 09:50:50.371490198 +0000 UTC m=+0.098397872 container died 36f4aea33fac2e514d4b0e7c1d6cb0659aa4edb177b73395b1f1e2b31c666a50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kapitsa, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:50:50 np0005591760 podman[251091]: 2026-01-22 09:50:50.289489687 +0000 UTC m=+0.016397382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:50:50 np0005591760 systemd[1]: var-lib-containers-storage-overlay-35be8b53785d72bbf0e2e067b8a8efa3f8d21602c55d9593be01f4df50ef8852-merged.mount: Deactivated successfully.
Jan 22 04:50:50 np0005591760 podman[251091]: 2026-01-22 09:50:50.398362481 +0000 UTC m=+0.125270156 container remove 36f4aea33fac2e514d4b0e7c1d6cb0659aa4edb177b73395b1f1e2b31c666a50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 22 04:50:50 np0005591760 systemd[1]: libpod-conmon-36f4aea33fac2e514d4b0e7c1d6cb0659aa4edb177b73395b1f1e2b31c666a50.scope: Deactivated successfully.
Jan 22 04:50:50 np0005591760 podman[251126]: 2026-01-22 09:50:50.521466502 +0000 UTC m=+0.030365313 container create 5833d28bc648156cc3f2018b088644b5d00f24acbec846b002ff2a9cf8086aed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 04:50:50 np0005591760 systemd[1]: Started libpod-conmon-5833d28bc648156cc3f2018b088644b5d00f24acbec846b002ff2a9cf8086aed.scope.
Jan 22 04:50:50 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:50:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5feac5242d627322be040e2350f7cae0950f09baac240bb36db4442c3c731296/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5feac5242d627322be040e2350f7cae0950f09baac240bb36db4442c3c731296/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5feac5242d627322be040e2350f7cae0950f09baac240bb36db4442c3c731296/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5feac5242d627322be040e2350f7cae0950f09baac240bb36db4442c3c731296/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5feac5242d627322be040e2350f7cae0950f09baac240bb36db4442c3c731296/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:50 np0005591760 podman[251126]: 2026-01-22 09:50:50.580025748 +0000 UTC m=+0.088924569 container init 5833d28bc648156cc3f2018b088644b5d00f24acbec846b002ff2a9cf8086aed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:50:50 np0005591760 podman[251126]: 2026-01-22 09:50:50.58592332 +0000 UTC m=+0.094822130 container start 5833d28bc648156cc3f2018b088644b5d00f24acbec846b002ff2a9cf8086aed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 04:50:50 np0005591760 podman[251126]: 2026-01-22 09:50:50.587090762 +0000 UTC m=+0.095989572 container attach 5833d28bc648156cc3f2018b088644b5d00f24acbec846b002ff2a9cf8086aed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 04:50:50 np0005591760 podman[251126]: 2026-01-22 09:50:50.508517714 +0000 UTC m=+0.017416525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:50:50 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:50:50 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:50:50 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:50:50 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:50:50 np0005591760 crazy_varahamihira[251139]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:50:50 np0005591760 crazy_varahamihira[251139]: --> All data devices are unavailable
Jan 22 04:50:50 np0005591760 systemd[1]: libpod-5833d28bc648156cc3f2018b088644b5d00f24acbec846b002ff2a9cf8086aed.scope: Deactivated successfully.
Jan 22 04:50:50 np0005591760 podman[251155]: 2026-01-22 09:50:50.889704488 +0000 UTC m=+0.017592665 container died 5833d28bc648156cc3f2018b088644b5d00f24acbec846b002ff2a9cf8086aed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 04:50:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:50 np0005591760 systemd[1]: var-lib-containers-storage-overlay-5feac5242d627322be040e2350f7cae0950f09baac240bb36db4442c3c731296-merged.mount: Deactivated successfully.
Jan 22 04:50:50 np0005591760 podman[251155]: 2026-01-22 09:50:50.911439444 +0000 UTC m=+0.039327623 container remove 5833d28bc648156cc3f2018b088644b5d00f24acbec846b002ff2a9cf8086aed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:50:50 np0005591760 systemd[1]: libpod-conmon-5833d28bc648156cc3f2018b088644b5d00f24acbec846b002ff2a9cf8086aed.scope: Deactivated successfully.
Jan 22 04:50:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 22 04:50:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1745700298' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 04:50:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 22 04:50:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1745700298' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 04:50:51 np0005591760 podman[251248]: 2026-01-22 09:50:51.368702908 +0000 UTC m=+0.030806963 container create e05d7aeb26b415f9bb045a203185528ffb7ac9d75a9f49f7618ee6459b07b4c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:50:51 np0005591760 systemd[1]: Started libpod-conmon-e05d7aeb26b415f9bb045a203185528ffb7ac9d75a9f49f7618ee6459b07b4c9.scope.
Jan 22 04:50:51 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:50:51 np0005591760 podman[251248]: 2026-01-22 09:50:51.41849357 +0000 UTC m=+0.080597646 container init e05d7aeb26b415f9bb045a203185528ffb7ac9d75a9f49f7618ee6459b07b4c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 04:50:51 np0005591760 podman[251248]: 2026-01-22 09:50:51.42346789 +0000 UTC m=+0.085571946 container start e05d7aeb26b415f9bb045a203185528ffb7ac9d75a9f49f7618ee6459b07b4c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 22 04:50:51 np0005591760 podman[251248]: 2026-01-22 09:50:51.426201385 +0000 UTC m=+0.088305441 container attach e05d7aeb26b415f9bb045a203185528ffb7ac9d75a9f49f7618ee6459b07b4c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 22 04:50:51 np0005591760 silly_archimedes[251261]: 167 167
Jan 22 04:50:51 np0005591760 systemd[1]: libpod-e05d7aeb26b415f9bb045a203185528ffb7ac9d75a9f49f7618ee6459b07b4c9.scope: Deactivated successfully.
Jan 22 04:50:51 np0005591760 podman[251248]: 2026-01-22 09:50:51.427531502 +0000 UTC m=+0.089635558 container died e05d7aeb26b415f9bb045a203185528ffb7ac9d75a9f49f7618ee6459b07b4c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:50:51 np0005591760 systemd[1]: var-lib-containers-storage-overlay-fc0048ce96e3d05ab4fdd52c7cc39d2629b5ea2742a5a22f1c5339323fd8992b-merged.mount: Deactivated successfully.
Jan 22 04:50:51 np0005591760 podman[251248]: 2026-01-22 09:50:51.447224809 +0000 UTC m=+0.109328865 container remove e05d7aeb26b415f9bb045a203185528ffb7ac9d75a9f49f7618ee6459b07b4c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:50:51 np0005591760 podman[251248]: 2026-01-22 09:50:51.355320292 +0000 UTC m=+0.017424368 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:50:51 np0005591760 systemd[1]: libpod-conmon-e05d7aeb26b415f9bb045a203185528ffb7ac9d75a9f49f7618ee6459b07b4c9.scope: Deactivated successfully.
Jan 22 04:50:51 np0005591760 podman[251283]: 2026-01-22 09:50:51.571621506 +0000 UTC m=+0.028285509 container create c1ac70c6c4b379db03a0766dd9bb2f07665dbe403b2ce23c267bcef82006badf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_curie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:50:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:51.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:51 np0005591760 systemd[1]: Started libpod-conmon-c1ac70c6c4b379db03a0766dd9bb2f07665dbe403b2ce23c267bcef82006badf.scope.
Jan 22 04:50:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:51 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:51 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:50:51 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1e72f24cea15292e8afabac35e733f26466771de15a00a67b1774280202082/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:51 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1e72f24cea15292e8afabac35e733f26466771de15a00a67b1774280202082/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:51 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1e72f24cea15292e8afabac35e733f26466771de15a00a67b1774280202082/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:51 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1e72f24cea15292e8afabac35e733f26466771de15a00a67b1774280202082/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:51 np0005591760 podman[251283]: 2026-01-22 09:50:51.631569352 +0000 UTC m=+0.088233364 container init c1ac70c6c4b379db03a0766dd9bb2f07665dbe403b2ce23c267bcef82006badf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 04:50:51 np0005591760 podman[251283]: 2026-01-22 09:50:51.637426648 +0000 UTC m=+0.094090659 container start c1ac70c6c4b379db03a0766dd9bb2f07665dbe403b2ce23c267bcef82006badf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:50:51 np0005591760 podman[251283]: 2026-01-22 09:50:51.638668068 +0000 UTC m=+0.095332100 container attach c1ac70c6c4b379db03a0766dd9bb2f07665dbe403b2ce23c267bcef82006badf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_curie, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 04:50:51 np0005591760 podman[251283]: 2026-01-22 09:50:51.56096933 +0000 UTC m=+0.017633361 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:50:51 np0005591760 keen_curie[251296]: {
Jan 22 04:50:51 np0005591760 keen_curie[251296]:    "0": [
Jan 22 04:50:51 np0005591760 keen_curie[251296]:        {
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            "devices": [
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "/dev/loop3"
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            ],
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            "lv_name": "ceph_lv0",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            "lv_size": "21470642176",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            "name": "ceph_lv0",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            "tags": {
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.cluster_name": "ceph",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.crush_device_class": "",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.encrypted": "0",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.osd_id": "0",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.type": "block",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.vdo": "0",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:                "ceph.with_tpm": "0"
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            },
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            "type": "block",
Jan 22 04:50:51 np0005591760 keen_curie[251296]:            "vg_name": "ceph_vg0"
Jan 22 04:50:51 np0005591760 keen_curie[251296]:        }
Jan 22 04:50:51 np0005591760 keen_curie[251296]:    ]
Jan 22 04:50:51 np0005591760 keen_curie[251296]: }
Jan 22 04:50:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 292 B/s rd, 0 op/s
Jan 22 04:50:51 np0005591760 systemd[1]: libpod-c1ac70c6c4b379db03a0766dd9bb2f07665dbe403b2ce23c267bcef82006badf.scope: Deactivated successfully.
Jan 22 04:50:51 np0005591760 podman[251283]: 2026-01-22 09:50:51.892988948 +0000 UTC m=+0.349652991 container died c1ac70c6c4b379db03a0766dd9bb2f07665dbe403b2ce23c267bcef82006badf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_curie, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 04:50:51 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ca1e72f24cea15292e8afabac35e733f26466771de15a00a67b1774280202082-merged.mount: Deactivated successfully.
Jan 22 04:50:51 np0005591760 podman[251283]: 2026-01-22 09:50:51.91286004 +0000 UTC m=+0.369524051 container remove c1ac70c6c4b379db03a0766dd9bb2f07665dbe403b2ce23c267bcef82006badf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_curie, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 04:50:51 np0005591760 systemd[1]: libpod-conmon-c1ac70c6c4b379db03a0766dd9bb2f07665dbe403b2ce23c267bcef82006badf.scope: Deactivated successfully.
Jan 22 04:50:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:52.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:52 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:52 np0005591760 podman[251424]: 2026-01-22 09:50:52.344944894 +0000 UTC m=+0.030940186 container create 7aba8b422569c034b871edd874524a4f98a3d3ae3057c3309fd9b932b5a83464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:50:52 np0005591760 systemd[1]: Started libpod-conmon-7aba8b422569c034b871edd874524a4f98a3d3ae3057c3309fd9b932b5a83464.scope.
Jan 22 04:50:52 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:50:52 np0005591760 podman[251424]: 2026-01-22 09:50:52.400010012 +0000 UTC m=+0.086005304 container init 7aba8b422569c034b871edd874524a4f98a3d3ae3057c3309fd9b932b5a83464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 04:50:52 np0005591760 podman[251424]: 2026-01-22 09:50:52.404789304 +0000 UTC m=+0.090784597 container start 7aba8b422569c034b871edd874524a4f98a3d3ae3057c3309fd9b932b5a83464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_fermat, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 22 04:50:52 np0005591760 dreamy_fermat[251437]: 167 167
Jan 22 04:50:52 np0005591760 systemd[1]: libpod-7aba8b422569c034b871edd874524a4f98a3d3ae3057c3309fd9b932b5a83464.scope: Deactivated successfully.
Jan 22 04:50:52 np0005591760 podman[251424]: 2026-01-22 09:50:52.412263087 +0000 UTC m=+0.098258379 container attach 7aba8b422569c034b871edd874524a4f98a3d3ae3057c3309fd9b932b5a83464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 04:50:52 np0005591760 podman[251424]: 2026-01-22 09:50:52.412435773 +0000 UTC m=+0.098431065 container died 7aba8b422569c034b871edd874524a4f98a3d3ae3057c3309fd9b932b5a83464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_fermat, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 04:50:52 np0005591760 systemd[1]: var-lib-containers-storage-overlay-00847bba88a7f7a3f08f97f344f07b06fb3268dc32ce3faf58934cd39d3973ed-merged.mount: Deactivated successfully.
Jan 22 04:50:52 np0005591760 podman[251424]: 2026-01-22 09:50:52.33302689 +0000 UTC m=+0.019022202 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:50:52 np0005591760 podman[251424]: 2026-01-22 09:50:52.437409897 +0000 UTC m=+0.123405189 container remove 7aba8b422569c034b871edd874524a4f98a3d3ae3057c3309fd9b932b5a83464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 04:50:52 np0005591760 systemd[1]: libpod-conmon-7aba8b422569c034b871edd874524a4f98a3d3ae3057c3309fd9b932b5a83464.scope: Deactivated successfully.
Jan 22 04:50:52 np0005591760 podman[251460]: 2026-01-22 09:50:52.561812878 +0000 UTC m=+0.029369194 container create 20473e5a2de7b64db800840068a175f92c8d5a1b89b189cad9ce7fd13caabdac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 22 04:50:52 np0005591760 systemd[1]: Started libpod-conmon-20473e5a2de7b64db800840068a175f92c8d5a1b89b189cad9ce7fd13caabdac.scope.
Jan 22 04:50:52 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:50:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547fdd363c2cea081648b2cb827665d55e6d15007f6bad4d6b4befbc9ca8d330/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547fdd363c2cea081648b2cb827665d55e6d15007f6bad4d6b4befbc9ca8d330/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547fdd363c2cea081648b2cb827665d55e6d15007f6bad4d6b4befbc9ca8d330/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547fdd363c2cea081648b2cb827665d55e6d15007f6bad4d6b4befbc9ca8d330/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:50:52 np0005591760 podman[251460]: 2026-01-22 09:50:52.61402771 +0000 UTC m=+0.081584036 container init 20473e5a2de7b64db800840068a175f92c8d5a1b89b189cad9ce7fd13caabdac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 22 04:50:52 np0005591760 podman[251460]: 2026-01-22 09:50:52.619282199 +0000 UTC m=+0.086838505 container start 20473e5a2de7b64db800840068a175f92c8d5a1b89b189cad9ce7fd13caabdac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wiles, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:50:52 np0005591760 podman[251460]: 2026-01-22 09:50:52.620404334 +0000 UTC m=+0.087960641 container attach 20473e5a2de7b64db800840068a175f92c8d5a1b89b189cad9ce7fd13caabdac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wiles, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:50:52 np0005591760 podman[251460]: 2026-01-22 09:50:52.549007851 +0000 UTC m=+0.016564177 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:50:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:52 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:53 np0005591760 lvm[251549]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:50:53 np0005591760 lvm[251549]: VG ceph_vg0 finished
Jan 22 04:50:53 np0005591760 crazy_wiles[251473]: {}
Jan 22 04:50:53 np0005591760 systemd[1]: libpod-20473e5a2de7b64db800840068a175f92c8d5a1b89b189cad9ce7fd13caabdac.scope: Deactivated successfully.
Jan 22 04:50:53 np0005591760 podman[251460]: 2026-01-22 09:50:53.125128968 +0000 UTC m=+0.592685274 container died 20473e5a2de7b64db800840068a175f92c8d5a1b89b189cad9ce7fd13caabdac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wiles, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:50:53 np0005591760 systemd[1]: var-lib-containers-storage-overlay-547fdd363c2cea081648b2cb827665d55e6d15007f6bad4d6b4befbc9ca8d330-merged.mount: Deactivated successfully.
Jan 22 04:50:53 np0005591760 podman[251460]: 2026-01-22 09:50:53.150209041 +0000 UTC m=+0.617765348 container remove 20473e5a2de7b64db800840068a175f92c8d5a1b89b189cad9ce7fd13caabdac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wiles, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:50:53 np0005591760 systemd[1]: libpod-conmon-20473e5a2de7b64db800840068a175f92c8d5a1b89b189cad9ce7fd13caabdac.scope: Deactivated successfully.
Jan 22 04:50:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:50:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:50:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:50:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:50:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:50:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:53.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:53 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:53 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:50:53 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:50:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 292 B/s rd, 0 op/s
Jan 22 04:50:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:54.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:54 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:54 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:55.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:55 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 292 B/s rd, 0 op/s
Jan 22 04:50:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:50:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:56.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:50:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:56 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:56 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:57.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:57.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:57.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:57.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:57.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:50:57] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 22 04:50:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:50:57] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 22 04:50:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:57 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 292 B/s rd, 0 op/s
Jan 22 04:50:58 np0005591760 podman[251593]: 2026-01-22 09:50:58.052764223 +0000 UTC m=+0.043646837 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 04:50:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:50:58.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:50:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:58.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:58.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:58.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:50:58.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:50:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f698c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:50:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:50:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:50:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:50:59.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:50:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:50:59 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000d740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:50:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 292 B/s rd, 0 op/s
Jan 22 04:51:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:00.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:00 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:00 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:01.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:01 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f698c004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:51:02 np0005591760 podman[251615]: 2026-01-22 09:51:02.064332194 +0000 UTC m=+0.057298770 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 04:51:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:02.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:02 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f698c004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:02 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:51:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:03.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:03 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:51:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:04.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f698c004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f698c004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:05.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:05 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 22 04:51:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:06.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:06 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f698c004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:06 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f698c004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:07.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 22 04:51:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:07.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:07.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:07.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:07.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:51:07] "GET /metrics HTTP/1.1" 200 48487 "" "Prometheus/2.51.0"
Jan 22 04:51:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:51:07] "GET /metrics HTTP/1.1" 200 48487 "" "Prometheus/2.51.0"
Jan 22 04:51:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:07 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:51:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:08.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:08 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.709 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.710 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.710 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.710 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.710 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.710 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.710 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.710 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.750 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.750 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.750 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.751 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 04:51:08 np0005591760 nova_compute[248045]: 2026-01-22 09:51:08.751 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:51:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:08.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:08.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:08.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:08.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:08 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:51:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2041103950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.104 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.296 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.297 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4919MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": 
"0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.297 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.298 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.408 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.409 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.530 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:51:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:09.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:09 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:51:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2694150051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.874 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.878 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:51:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.897 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.898 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 04:51:09 np0005591760 nova_compute[248045]: 2026-01-22 09:51:09.898 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:51:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095109 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:51:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:10.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:10 np0005591760 nova_compute[248045]: 2026-01-22 09:51:10.488 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:51:10 np0005591760 nova_compute[248045]: 2026-01-22 09:51:10.489 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 04:51:10 np0005591760 nova_compute[248045]: 2026-01-22 09:51:10.489 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 04:51:10 np0005591760 nova_compute[248045]: 2026-01-22 09:51:10.505 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 04:51:10 np0005591760 nova_compute[248045]: 2026-01-22 09:51:10.505 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:51:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000db10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:11.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:11 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000db10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:51:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:12.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095112 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:51:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:51:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:13.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:13 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000db10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:51:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:14.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000db10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:15.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:15 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 22 04:51:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:16.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:17.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:17.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:17.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:17.031Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:51:17] "GET /metrics HTTP/1.1" 200 48487 "" "Prometheus/2.51.0"
Jan 22 04:51:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:51:17] "GET /metrics HTTP/1.1" 200 48487 "" "Prometheus/2.51.0"
Jan 22 04:51:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:17.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:17 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 22 04:51:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:18.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:51:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:51:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:18.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:18.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:18.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:18.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:51:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:51:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:51:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:51:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:51:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:51:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:19.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:19 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 22 04:51:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:20.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000db10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:21.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:21 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:21 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:51:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:21 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:51:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:21 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:51:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Jan 22 04:51:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:22.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:23 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:51:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:51:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:23.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:23 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Jan 22 04:51:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:24.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:51:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:25.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:51:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:25 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 22 04:51:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:51:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:26.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:27.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:27.031Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:27.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:27.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:51:27] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 22 04:51:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:51:27] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 22 04:51:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:27.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:27 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c007340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:51:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:28.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:28 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:51:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:28.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:28.864Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:28.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:28.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:28 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:29 np0005591760 podman[251737]: 2026-01-22 09:51:29.048500959 +0000 UTC m=+0.041298858 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 04:51:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:29.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:29 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 22 04:51:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095129 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:51:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:30.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=404 latency=0.001000011s ======
Jan 22 04:51:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:30.241 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.001000011s
Jan 22 04:51:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:51:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - - [22/Jan/2026:09:51:30.253 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000010s
Jan 22 04:51:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69840056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:31.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:31 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 22 04:51:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:32.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:32 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095132 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:51:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:32 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:33 np0005591760 podman[251782]: 2026-01-22 09:51:33.066296734 +0000 UTC m=+0.059691002 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.372699) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075493372740, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 988, "num_deletes": 254, "total_data_size": 1653292, "memory_usage": 1685264, "flush_reason": "Manual Compaction"}
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075493377666, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1611626, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18340, "largest_seqno": 19327, "table_properties": {"data_size": 1606847, "index_size": 2303, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10021, "raw_average_key_size": 18, "raw_value_size": 1597219, "raw_average_value_size": 2957, "num_data_blocks": 103, "num_entries": 540, "num_filter_entries": 540, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769075408, "oldest_key_time": 1769075408, "file_creation_time": 1769075493, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 4977 microseconds, and 3241 cpu microseconds.
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.377690) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1611626 bytes OK
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.377701) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.378022) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.378032) EVENT_LOG_v1 {"time_micros": 1769075493378029, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.378041) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1648702, prev total WAL file size 1648702, number of live WAL files 2.
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.378426) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1573KB)], [38(12MB)]
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075493378448, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14218453, "oldest_snapshot_seqno": -1}
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 5159 keys, 13751427 bytes, temperature: kUnknown
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075493411538, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13751427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13715759, "index_size": 21670, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 130614, "raw_average_key_size": 25, "raw_value_size": 13620999, "raw_average_value_size": 2640, "num_data_blocks": 893, "num_entries": 5159, "num_filter_entries": 5159, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769075493, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.411683) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13751427 bytes
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.412113) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 429.1 rd, 415.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 12.0 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(17.4) write-amplify(8.5) OK, records in: 5681, records dropped: 522 output_compression: NoCompression
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.412127) EVENT_LOG_v1 {"time_micros": 1769075493412121, "job": 18, "event": "compaction_finished", "compaction_time_micros": 33134, "compaction_time_cpu_micros": 19290, "output_level": 6, "num_output_files": 1, "total_output_size": 13751427, "num_input_records": 5681, "num_output_records": 5159, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075493412425, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075493414070, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.378375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.414091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.414093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.414094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.414095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:51:33 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:51:33.414096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:51:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:33.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:33 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Jan 22 04:51:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:51:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:34.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:51:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:34 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 22 04:51:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 22 04:51:34 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 22 04:51:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:34 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 22 04:51:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 22 04:51:35 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 22 04:51:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:35.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:35 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s
Jan 22 04:51:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:51:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:36.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:51:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:36 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 22 04:51:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 22 04:51:36 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 22 04:51:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:36 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:37.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:37.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:37.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:37.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:51:37] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 22 04:51:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:51:37] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 22 04:51:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:37.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:37 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Jan 22 04:51:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:51:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:38.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:51:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:38 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:51:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 22 04:51:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 22 04:51:38 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 22 04:51:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:38.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:38.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:38.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:38.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:38 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:39.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:39 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 185 B/s wr, 0 op/s
Jan 22 04:51:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:40.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:40 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:40 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0006710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:41.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:41 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v578: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 6.3 MiB/s wr, 58 op/s
Jan 22 04:51:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:42.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:51:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 22 04:51:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 22 04:51:43 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 22 04:51:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:43.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:43 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a00075a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v580: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.5 MiB/s wr, 51 op/s
Jan 22 04:51:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:44.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:44 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:44 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:51:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:45.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:51:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:45 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v581: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 22 04:51:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:46.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:46 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a00075a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:46 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:47.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:47.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:47.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:47.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:51:47.307 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:51:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:51:47.307 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:51:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:51:47.308 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:51:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:51:47] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 22 04:51:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:51:47] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 22 04:51:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:47.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:47 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v582: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.3 MiB/s wr, 40 op/s
Jan 22 04:51:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:48.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69980052c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:51:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:48.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:48.869Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:48.869Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:48.869Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0008430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:51:49
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['volumes', '.rgw.root', 'images', 'vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.nfs', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr']
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:51:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:49.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:49 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0008430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v583: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Jan 22 04:51:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:50.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998005be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:51.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:51 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0008430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v584: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Jan 22 04:51:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:52.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:52 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:52 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:51:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:53.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:53 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998005be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v585: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 291 B/s rd, 0 op/s
Jan 22 04:51:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v586: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 op/s
Jan 22 04:51:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:51:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:51:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:51:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:51:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:54.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:54 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0008430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:54 np0005591760 podman[252012]: 2026-01-22 09:51:54.446582304 +0000 UTC m=+0.038389005 container create 735f666d4469d4eaebacf4c6db02a100d1ff7289ea735cb5e725514c522a902e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 04:51:54 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:51:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:51:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:51:54 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:51:54 np0005591760 systemd[1]: Started libpod-conmon-735f666d4469d4eaebacf4c6db02a100d1ff7289ea735cb5e725514c522a902e.scope.
Jan 22 04:51:54 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:51:54 np0005591760 podman[252012]: 2026-01-22 09:51:54.512453586 +0000 UTC m=+0.104260288 container init 735f666d4469d4eaebacf4c6db02a100d1ff7289ea735cb5e725514c522a902e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 22 04:51:54 np0005591760 podman[252012]: 2026-01-22 09:51:54.518115377 +0000 UTC m=+0.109922079 container start 735f666d4469d4eaebacf4c6db02a100d1ff7289ea735cb5e725514c522a902e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_nobel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 04:51:54 np0005591760 trusting_nobel[252026]: 167 167
Jan 22 04:51:54 np0005591760 podman[252012]: 2026-01-22 09:51:54.521969621 +0000 UTC m=+0.113776323 container attach 735f666d4469d4eaebacf4c6db02a100d1ff7289ea735cb5e725514c522a902e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_nobel, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 04:51:54 np0005591760 systemd[1]: libpod-735f666d4469d4eaebacf4c6db02a100d1ff7289ea735cb5e725514c522a902e.scope: Deactivated successfully.
Jan 22 04:51:54 np0005591760 podman[252012]: 2026-01-22 09:51:54.523242891 +0000 UTC m=+0.115049593 container died 735f666d4469d4eaebacf4c6db02a100d1ff7289ea735cb5e725514c522a902e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:51:54 np0005591760 podman[252012]: 2026-01-22 09:51:54.431349053 +0000 UTC m=+0.023155775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:51:54 np0005591760 systemd[1]: var-lib-containers-storage-overlay-21677bcb47d8f1c8e3eab640299810ac187d2484ebc27ee6153f3e21a12991b3-merged.mount: Deactivated successfully.
Jan 22 04:51:54 np0005591760 podman[252012]: 2026-01-22 09:51:54.543597655 +0000 UTC m=+0.135404357 container remove 735f666d4469d4eaebacf4c6db02a100d1ff7289ea735cb5e725514c522a902e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:51:54 np0005591760 systemd[1]: libpod-conmon-735f666d4469d4eaebacf4c6db02a100d1ff7289ea735cb5e725514c522a902e.scope: Deactivated successfully.
Jan 22 04:51:54 np0005591760 podman[252048]: 2026-01-22 09:51:54.704027417 +0000 UTC m=+0.032598752 container create 21470e5d0b9f1d3e91985438a047b4f27e4585ebe84130360a45db51bd920cb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_khayyam, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:51:54 np0005591760 systemd[1]: Started libpod-conmon-21470e5d0b9f1d3e91985438a047b4f27e4585ebe84130360a45db51bd920cb2.scope.
Jan 22 04:51:54 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:51:54 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f950f158069b30af43ef1a7522620fe3b2545fa7cb978a194134b3088667237/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:54 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f950f158069b30af43ef1a7522620fe3b2545fa7cb978a194134b3088667237/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:54 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f950f158069b30af43ef1a7522620fe3b2545fa7cb978a194134b3088667237/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:54 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f950f158069b30af43ef1a7522620fe3b2545fa7cb978a194134b3088667237/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:54 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f950f158069b30af43ef1a7522620fe3b2545fa7cb978a194134b3088667237/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:54 np0005591760 podman[252048]: 2026-01-22 09:51:54.771365845 +0000 UTC m=+0.099937199 container init 21470e5d0b9f1d3e91985438a047b4f27e4585ebe84130360a45db51bd920cb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 22 04:51:54 np0005591760 podman[252048]: 2026-01-22 09:51:54.77814396 +0000 UTC m=+0.106715294 container start 21470e5d0b9f1d3e91985438a047b4f27e4585ebe84130360a45db51bd920cb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 04:51:54 np0005591760 podman[252048]: 2026-01-22 09:51:54.779541243 +0000 UTC m=+0.108112577 container attach 21470e5d0b9f1d3e91985438a047b4f27e4585ebe84130360a45db51bd920cb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:51:54 np0005591760 podman[252048]: 2026-01-22 09:51:54.69113063 +0000 UTC m=+0.019701984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:51:54 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:51:54.899 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:52:1d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:ec:a7:e9:bb:bd'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:51:54 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:51:54.901 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 04:51:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:54 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0008430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:55 np0005591760 unruffled_khayyam[252062]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:51:55 np0005591760 unruffled_khayyam[252062]: --> All data devices are unavailable
Jan 22 04:51:55 np0005591760 systemd[1]: libpod-21470e5d0b9f1d3e91985438a047b4f27e4585ebe84130360a45db51bd920cb2.scope: Deactivated successfully.
Jan 22 04:51:55 np0005591760 conmon[252062]: conmon 21470e5d0b9f1d3e9198 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21470e5d0b9f1d3e91985438a047b4f27e4585ebe84130360a45db51bd920cb2.scope/container/memory.events
Jan 22 04:51:55 np0005591760 podman[252048]: 2026-01-22 09:51:55.097710367 +0000 UTC m=+0.426281700 container died 21470e5d0b9f1d3e91985438a047b4f27e4585ebe84130360a45db51bd920cb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:51:55 np0005591760 systemd[1]: var-lib-containers-storage-overlay-8f950f158069b30af43ef1a7522620fe3b2545fa7cb978a194134b3088667237-merged.mount: Deactivated successfully.
Jan 22 04:51:55 np0005591760 podman[252048]: 2026-01-22 09:51:55.122366889 +0000 UTC m=+0.450938223 container remove 21470e5d0b9f1d3e91985438a047b4f27e4585ebe84130360a45db51bd920cb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Jan 22 04:51:55 np0005591760 systemd[1]: libpod-conmon-21470e5d0b9f1d3e91985438a047b4f27e4585ebe84130360a45db51bd920cb2.scope: Deactivated successfully.
Jan 22 04:51:55 np0005591760 podman[252170]: 2026-01-22 09:51:55.597833863 +0000 UTC m=+0.030393004 container create 7c6fcbd749634a18cc9b5aabaecb0bdb7a28e9490f06b1d9ec8b8513282d3f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ramanujan, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 04:51:55 np0005591760 systemd[1]: Started libpod-conmon-7c6fcbd749634a18cc9b5aabaecb0bdb7a28e9490f06b1d9ec8b8513282d3f8a.scope.
Jan 22 04:51:55 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:51:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:55.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:55 np0005591760 podman[252170]: 2026-01-22 09:51:55.657406513 +0000 UTC m=+0.089965655 container init 7c6fcbd749634a18cc9b5aabaecb0bdb7a28e9490f06b1d9ec8b8513282d3f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 22 04:51:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:55 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:55 np0005591760 podman[252170]: 2026-01-22 09:51:55.663206504 +0000 UTC m=+0.095765647 container start 7c6fcbd749634a18cc9b5aabaecb0bdb7a28e9490f06b1d9ec8b8513282d3f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:51:55 np0005591760 podman[252170]: 2026-01-22 09:51:55.664386018 +0000 UTC m=+0.096945160 container attach 7c6fcbd749634a18cc9b5aabaecb0bdb7a28e9490f06b1d9ec8b8513282d3f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 04:51:55 np0005591760 musing_ramanujan[252184]: 167 167
Jan 22 04:51:55 np0005591760 systemd[1]: libpod-7c6fcbd749634a18cc9b5aabaecb0bdb7a28e9490f06b1d9ec8b8513282d3f8a.scope: Deactivated successfully.
Jan 22 04:51:55 np0005591760 conmon[252184]: conmon 7c6fcbd749634a18cc9b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c6fcbd749634a18cc9b5aabaecb0bdb7a28e9490f06b1d9ec8b8513282d3f8a.scope/container/memory.events
Jan 22 04:51:55 np0005591760 podman[252170]: 2026-01-22 09:51:55.668341412 +0000 UTC m=+0.100900554 container died 7c6fcbd749634a18cc9b5aabaecb0bdb7a28e9490f06b1d9ec8b8513282d3f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ramanujan, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 22 04:51:55 np0005591760 podman[252170]: 2026-01-22 09:51:55.585613269 +0000 UTC m=+0.018172431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:51:55 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a89641916b0c81d6e09e322128352c5889bd28a36f34807fe64016017360bcd2-merged.mount: Deactivated successfully.
Jan 22 04:51:55 np0005591760 podman[252170]: 2026-01-22 09:51:55.69237507 +0000 UTC m=+0.124934212 container remove 7c6fcbd749634a18cc9b5aabaecb0bdb7a28e9490f06b1d9ec8b8513282d3f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ramanujan, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:51:55 np0005591760 systemd[1]: libpod-conmon-7c6fcbd749634a18cc9b5aabaecb0bdb7a28e9490f06b1d9ec8b8513282d3f8a.scope: Deactivated successfully.
Jan 22 04:51:55 np0005591760 podman[252207]: 2026-01-22 09:51:55.837017036 +0000 UTC m=+0.037245931 container create 45a2f73066c57049a1768fc1a8a9bc74589cb390b0bc00269034f95de9315d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 22 04:51:55 np0005591760 systemd[1]: Started libpod-conmon-45a2f73066c57049a1768fc1a8a9bc74589cb390b0bc00269034f95de9315d5f.scope.
Jan 22 04:51:55 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:51:55 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3b26bc2d387df5bd3a8ba939ccdaec9371e4c3a11352ea7e35d7e826489c51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:55 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3b26bc2d387df5bd3a8ba939ccdaec9371e4c3a11352ea7e35d7e826489c51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:55 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3b26bc2d387df5bd3a8ba939ccdaec9371e4c3a11352ea7e35d7e826489c51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:55 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3b26bc2d387df5bd3a8ba939ccdaec9371e4c3a11352ea7e35d7e826489c51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:55 np0005591760 podman[252207]: 2026-01-22 09:51:55.901016581 +0000 UTC m=+0.101245486 container init 45a2f73066c57049a1768fc1a8a9bc74589cb390b0bc00269034f95de9315d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mirzakhani, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:51:55 np0005591760 podman[252207]: 2026-01-22 09:51:55.906426387 +0000 UTC m=+0.106655282 container start 45a2f73066c57049a1768fc1a8a9bc74589cb390b0bc00269034f95de9315d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 04:51:55 np0005591760 podman[252207]: 2026-01-22 09:51:55.907966319 +0000 UTC m=+0.108195224 container attach 45a2f73066c57049a1768fc1a8a9bc74589cb390b0bc00269034f95de9315d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:51:55 np0005591760 podman[252207]: 2026-01-22 09:51:55.824181765 +0000 UTC m=+0.024410660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:51:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v587: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 op/s
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]: {
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:    "0": [
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:        {
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            "devices": [
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "/dev/loop3"
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            ],
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            "lv_name": "ceph_lv0",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            "lv_size": "21470642176",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            "name": "ceph_lv0",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            "tags": {
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.cluster_name": "ceph",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.crush_device_class": "",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.encrypted": "0",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.osd_id": "0",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.type": "block",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.vdo": "0",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:                "ceph.with_tpm": "0"
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            },
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            "type": "block",
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:            "vg_name": "ceph_vg0"
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:        }
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]:    ]
Jan 22 04:51:56 np0005591760 condescending_mirzakhani[252221]: }
Jan 22 04:51:56 np0005591760 systemd[1]: libpod-45a2f73066c57049a1768fc1a8a9bc74589cb390b0bc00269034f95de9315d5f.scope: Deactivated successfully.
Jan 22 04:51:56 np0005591760 podman[252207]: 2026-01-22 09:51:56.153991468 +0000 UTC m=+0.354220363 container died 45a2f73066c57049a1768fc1a8a9bc74589cb390b0bc00269034f95de9315d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mirzakhani, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:51:56 np0005591760 systemd[1]: var-lib-containers-storage-overlay-fc3b26bc2d387df5bd3a8ba939ccdaec9371e4c3a11352ea7e35d7e826489c51-merged.mount: Deactivated successfully.
Jan 22 04:51:56 np0005591760 podman[252207]: 2026-01-22 09:51:56.182579059 +0000 UTC m=+0.382807944 container remove 45a2f73066c57049a1768fc1a8a9bc74589cb390b0bc00269034f95de9315d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:51:56 np0005591760 systemd[1]: libpod-conmon-45a2f73066c57049a1768fc1a8a9bc74589cb390b0bc00269034f95de9315d5f.scope: Deactivated successfully.
Jan 22 04:51:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:56.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:56 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0008430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:56 np0005591760 podman[252322]: 2026-01-22 09:51:56.641577233 +0000 UTC m=+0.033828771 container create 6c0508800c84c664b8a847b24030edab4399e43a68b2086b6fce8672238d100d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:51:56 np0005591760 systemd[1]: Started libpod-conmon-6c0508800c84c664b8a847b24030edab4399e43a68b2086b6fce8672238d100d.scope.
Jan 22 04:51:56 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:51:56 np0005591760 podman[252322]: 2026-01-22 09:51:56.699055887 +0000 UTC m=+0.091307445 container init 6c0508800c84c664b8a847b24030edab4399e43a68b2086b6fce8672238d100d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:51:56 np0005591760 podman[252322]: 2026-01-22 09:51:56.704276736 +0000 UTC m=+0.096528275 container start 6c0508800c84c664b8a847b24030edab4399e43a68b2086b6fce8672238d100d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_jennings, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:51:56 np0005591760 podman[252322]: 2026-01-22 09:51:56.705455749 +0000 UTC m=+0.097707307 container attach 6c0508800c84c664b8a847b24030edab4399e43a68b2086b6fce8672238d100d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_jennings, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:51:56 np0005591760 focused_jennings[252336]: 167 167
Jan 22 04:51:56 np0005591760 systemd[1]: libpod-6c0508800c84c664b8a847b24030edab4399e43a68b2086b6fce8672238d100d.scope: Deactivated successfully.
Jan 22 04:51:56 np0005591760 podman[252322]: 2026-01-22 09:51:56.709270599 +0000 UTC m=+0.101522137 container died 6c0508800c84c664b8a847b24030edab4399e43a68b2086b6fce8672238d100d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Jan 22 04:51:56 np0005591760 podman[252322]: 2026-01-22 09:51:56.627316735 +0000 UTC m=+0.019568293 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:51:56 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b8fb4613f1d5c1058ed817720a5629c0123e96bcdc1dca3fbc9334b376118c7d-merged.mount: Deactivated successfully.
Jan 22 04:51:56 np0005591760 podman[252322]: 2026-01-22 09:51:56.730650836 +0000 UTC m=+0.122902373 container remove 6c0508800c84c664b8a847b24030edab4399e43a68b2086b6fce8672238d100d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Jan 22 04:51:56 np0005591760 systemd[1]: libpod-conmon-6c0508800c84c664b8a847b24030edab4399e43a68b2086b6fce8672238d100d.scope: Deactivated successfully.
Jan 22 04:51:56 np0005591760 podman[252358]: 2026-01-22 09:51:56.859381692 +0000 UTC m=+0.029743449 container create c11724d3c29c2f03b723ab0f5cce5589d61a1814434622e287f2e6b5b5a6ff80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:51:56 np0005591760 systemd[1]: Started libpod-conmon-c11724d3c29c2f03b723ab0f5cce5589d61a1814434622e287f2e6b5b5a6ff80.scope.
Jan 22 04:51:56 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:51:56 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bedf0c34806ed65069e1bbc657d11e48c07c4342568bce8b1c09313f5fb5398/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:56 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bedf0c34806ed65069e1bbc657d11e48c07c4342568bce8b1c09313f5fb5398/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:56 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bedf0c34806ed65069e1bbc657d11e48c07c4342568bce8b1c09313f5fb5398/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:56 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bedf0c34806ed65069e1bbc657d11e48c07c4342568bce8b1c09313f5fb5398/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:51:56 np0005591760 podman[252358]: 2026-01-22 09:51:56.918547446 +0000 UTC m=+0.088909214 container init c11724d3c29c2f03b723ab0f5cce5589d61a1814434622e287f2e6b5b5a6ff80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_curran, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:51:56 np0005591760 podman[252358]: 2026-01-22 09:51:56.924390819 +0000 UTC m=+0.094752567 container start c11724d3c29c2f03b723ab0f5cce5589d61a1814434622e287f2e6b5b5a6ff80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_curran, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 04:51:56 np0005591760 podman[252358]: 2026-01-22 09:51:56.927596171 +0000 UTC m=+0.097957938 container attach c11724d3c29c2f03b723ab0f5cce5589d61a1814434622e287f2e6b5b5a6ff80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 04:51:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:56 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:56 np0005591760 podman[252358]: 2026-01-22 09:51:56.848340192 +0000 UTC m=+0.018701949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:51:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:57.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:57.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:57.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:57.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:57 np0005591760 elastic_curran[252371]: {}
Jan 22 04:51:57 np0005591760 lvm[252448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:51:57 np0005591760 lvm[252448]: VG ceph_vg0 finished
Jan 22 04:51:57 np0005591760 systemd[1]: libpod-c11724d3c29c2f03b723ab0f5cce5589d61a1814434622e287f2e6b5b5a6ff80.scope: Deactivated successfully.
Jan 22 04:51:57 np0005591760 podman[252358]: 2026-01-22 09:51:57.435248797 +0000 UTC m=+0.605610544 container died c11724d3c29c2f03b723ab0f5cce5589d61a1814434622e287f2e6b5b5a6ff80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_curran, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:51:57 np0005591760 systemd[1]: var-lib-containers-storage-overlay-7bedf0c34806ed65069e1bbc657d11e48c07c4342568bce8b1c09313f5fb5398-merged.mount: Deactivated successfully.
Jan 22 04:51:57 np0005591760 podman[252358]: 2026-01-22 09:51:57.456735243 +0000 UTC m=+0.627096991 container remove c11724d3c29c2f03b723ab0f5cce5589d61a1814434622e287f2e6b5b5a6ff80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:51:57 np0005591760 systemd[1]: libpod-conmon-c11724d3c29c2f03b723ab0f5cce5589d61a1814434622e287f2e6b5b5a6ff80.scope: Deactivated successfully.
Jan 22 04:51:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:51:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:51:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:51:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:51:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:51:57] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Jan 22 04:51:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:51:57] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Jan 22 04:51:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:57.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:57 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998005be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v588: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 op/s
Jan 22 04:51:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:51:58.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:51:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:51:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:51:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:58.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:58.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:58.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:51:58.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:51:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0009530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:51:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:51:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:51:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:51:59.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:51:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:51:59 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:51:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v589: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 op/s
Jan 22 04:52:00 np0005591760 podman[252487]: 2026-01-22 09:52:00.041898438 +0000 UTC m=+0.034996371 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 04:52:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:52:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:00.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:52:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:00 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998005be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:00 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998005be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:01.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:01 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0009530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v590: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 305 B/s rd, 0 op/s
Jan 22 04:52:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:02.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:02 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:02 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:52:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:52:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:03.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:52:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:03 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998005be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v591: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 op/s
Jan 22 04:52:04 np0005591760 podman[252507]: 2026-01-22 09:52:04.07534169 +0000 UTC m=+0.059926267 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 04:52:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:04.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998005be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:04 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:04.903 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:52:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998005be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:05.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:05 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998005be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v592: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:52:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:06.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:06 np0005591760 nova_compute[248045]: 2026-01-22 09:52:06.312 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:52:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:06 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998005be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:06 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:07.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:07.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:07.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:07.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:07 np0005591760 nova_compute[248045]: 2026-01-22 09:52:07.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:52:07 np0005591760 nova_compute[248045]: 2026-01-22 09:52:07.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:52:07 np0005591760 nova_compute[248045]: 2026-01-22 09:52:07.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 04:52:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:52:07] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 22 04:52:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:52:07] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 22 04:52:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:52:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:07.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:52:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:07 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v593: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:52:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:08.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:08 np0005591760 nova_compute[248045]: 2026-01-22 09:52:08.296 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:52:08 np0005591760 nova_compute[248045]: 2026-01-22 09:52:08.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:52:08 np0005591760 nova_compute[248045]: 2026-01-22 09:52:08.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:52:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:52:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:08 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0009530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:08.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:08.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:08.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:08.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:08 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6998005be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.319 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.319 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.320 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.320 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.320 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:52:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:09.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:09 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.685 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.898 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.899 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4918MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": 
"0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.900 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.900 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.944 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.944 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 04:52:09 np0005591760 nova_compute[248045]: 2026-01-22 09:52:09.957 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:52:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v594: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:52:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:52:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:10.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:52:10 np0005591760 nova_compute[248045]: 2026-01-22 09:52:10.333 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:52:10 np0005591760 nova_compute[248045]: 2026-01-22 09:52:10.337 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:52:10 np0005591760 nova_compute[248045]: 2026-01-22 09:52:10.353 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:52:10 np0005591760 nova_compute[248045]: 2026-01-22 09:52:10.355 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 04:52:10 np0005591760 nova_compute[248045]: 2026-01-22 09:52:10.355 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:52:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0009530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:11.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:11 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v595: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 22 04:52:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:52:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:12.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.355 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.356 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.356 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.366 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.366 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.380 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.380 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:52:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.396 248049 DEBUG nova.compute.manager [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.467 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.467 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.473 248049 DEBUG nova.virt.hardware [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.473 248049 INFO nova.compute.claims [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.547 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:52:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:52:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/250314445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.898 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.902 248049 DEBUG nova.compute.provider_tree [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.914 248049 DEBUG nova.scheduler.client.report [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.928 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.929 248049 DEBUG nova.compute.manager [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 04:52:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.963 248049 DEBUG nova.compute.manager [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.963 248049 DEBUG nova.network.neutron [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.980 248049 INFO nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 04:52:12 np0005591760 nova_compute[248045]: 2026-01-22 09:52:12.994 248049 DEBUG nova.compute.manager [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 04:52:13 np0005591760 nova_compute[248045]: 2026-01-22 09:52:13.075 248049 DEBUG nova.compute.manager [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 04:52:13 np0005591760 nova_compute[248045]: 2026-01-22 09:52:13.076 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 04:52:13 np0005591760 nova_compute[248045]: 2026-01-22 09:52:13.076 248049 INFO nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Creating image(s)#033[00m
Jan 22 04:52:13 np0005591760 nova_compute[248045]: 2026-01-22 09:52:13.099 248049 DEBUG nova.storage.rbd_utils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:52:13 np0005591760 nova_compute[248045]: 2026-01-22 09:52:13.120 248049 DEBUG nova.storage.rbd_utils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:52:13 np0005591760 nova_compute[248045]: 2026-01-22 09:52:13.139 248049 DEBUG nova.storage.rbd_utils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:52:13 np0005591760 nova_compute[248045]: 2026-01-22 09:52:13.144 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "9db187949728ea707722fd244d769f131efa8688" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:52:13 np0005591760 nova_compute[248045]: 2026-01-22 09:52:13.144 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "9db187949728ea707722fd244d769f131efa8688" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:52:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:52:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:13.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:13 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0009530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:13 np0005591760 nova_compute[248045]: 2026-01-22 09:52:13.934 248049 WARNING oslo_policy.policy [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Jan 22 04:52:13 np0005591760 nova_compute[248045]: 2026-01-22 09:52:13.934 248049 WARNING oslo_policy.policy [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Jan 22 04:52:13 np0005591760 nova_compute[248045]: 2026-01-22 09:52:13.936 248049 DEBUG nova.policy [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4428dd9b0fb64c25b8f33b0050d4ef6f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 04:52:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v596: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:52:14 np0005591760 nova_compute[248045]: 2026-01-22 09:52:14.169 248049 DEBUG nova.virt.libvirt.imagebackend [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Image locations are: [{'url': 'rbd://43df7a30-cf5f-5209-adfd-bf44298b19f2/images/bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://43df7a30-cf5f-5209-adfd-bf44298b19f2/images/bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 22 04:52:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:14.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:14 np0005591760 nova_compute[248045]: 2026-01-22 09:52:14.910 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:52:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:14 np0005591760 nova_compute[248045]: 2026-01-22 09:52:14.957 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688.part --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:52:14 np0005591760 nova_compute[248045]: 2026-01-22 09:52:14.958 248049 DEBUG nova.virt.images [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Jan 22 04:52:14 np0005591760 nova_compute[248045]: 2026-01-22 09:52:14.959 248049 DEBUG nova.privsep.utils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 22 04:52:14 np0005591760 nova_compute[248045]: 2026-01-22 09:52:14.960 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688.part /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:52:15 np0005591760 nova_compute[248045]: 2026-01-22 09:52:15.023 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688.part /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688.converted" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:52:15 np0005591760 nova_compute[248045]: 2026-01-22 09:52:15.026 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:52:15 np0005591760 nova_compute[248045]: 2026-01-22 09:52:15.074 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688.converted --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:52:15 np0005591760 nova_compute[248045]: 2026-01-22 09:52:15.075 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "9db187949728ea707722fd244d769f131efa8688" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:52:15 np0005591760 nova_compute[248045]: 2026-01-22 09:52:15.093 248049 DEBUG nova.storage.rbd_utils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:52:15 np0005591760 nova_compute[248045]: 2026-01-22 09:52:15.095 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688 db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:52:15 np0005591760 nova_compute[248045]: 2026-01-22 09:52:15.430 248049 DEBUG nova.network.neutron [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Successfully created port: 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 22 04:52:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 22 04:52:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 22 04:52:15 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 22 04:52:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:15.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:15 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v598: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 8 op/s
Jan 22 04:52:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:16.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0009530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.612836) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075536612993, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 737, "num_deletes": 251, "total_data_size": 1005717, "memory_usage": 1020200, "flush_reason": "Manual Compaction"}
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075536618591, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 991822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19328, "largest_seqno": 20064, "table_properties": {"data_size": 988018, "index_size": 1584, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8926, "raw_average_key_size": 19, "raw_value_size": 980122, "raw_average_value_size": 2168, "num_data_blocks": 70, "num_entries": 452, "num_filter_entries": 452, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769075494, "oldest_key_time": 1769075494, "file_creation_time": 1769075536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 5757 microseconds, and 3498 cpu microseconds.
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.618616) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 991822 bytes OK
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.618635) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.619017) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.619026) EVENT_LOG_v1 {"time_micros": 1769075536619023, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.619038) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1001951, prev total WAL file size 1001951, number of live WAL files 2.
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.619377) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(968KB)], [41(13MB)]
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075536619405, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 14743249, "oldest_snapshot_seqno": -1}
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 5091 keys, 12567756 bytes, temperature: kUnknown
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075536648832, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 12567756, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12533505, "index_size": 20445, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 129920, "raw_average_key_size": 25, "raw_value_size": 12440717, "raw_average_value_size": 2443, "num_data_blocks": 838, "num_entries": 5091, "num_filter_entries": 5091, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769075536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.649140) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12567756 bytes
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.649612) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 498.0 rd, 424.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.1 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(27.5) write-amplify(12.7) OK, records in: 5611, records dropped: 520 output_compression: NoCompression
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.649627) EVENT_LOG_v1 {"time_micros": 1769075536649620, "job": 20, "event": "compaction_finished", "compaction_time_micros": 29603, "compaction_time_cpu_micros": 20420, "output_level": 6, "num_output_files": 1, "total_output_size": 12567756, "num_input_records": 5611, "num_output_records": 5091, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075536649919, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075536652453, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.619343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.652490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.652494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.652496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.652497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:52:16 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:52:16.652498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:52:16 np0005591760 nova_compute[248045]: 2026-01-22 09:52:16.724 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688 db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:52:16 np0005591760 nova_compute[248045]: 2026-01-22 09:52:16.781 248049 DEBUG nova.storage.rbd_utils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] resizing rbd image db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 22 04:52:16 np0005591760 nova_compute[248045]: 2026-01-22 09:52:16.857 248049 DEBUG nova.objects.instance [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lazy-loading 'migration_context' on Instance uuid db3f9d63-cffc-4b71-b42f-7bd2d9e41955 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:52:16 np0005591760 nova_compute[248045]: 2026-01-22 09:52:16.872 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 22 04:52:16 np0005591760 nova_compute[248045]: 2026-01-22 09:52:16.873 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Ensure instance console log exists: /var/lib/nova/instances/db3f9d63-cffc-4b71-b42f-7bd2d9e41955/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 04:52:16 np0005591760 nova_compute[248045]: 2026-01-22 09:52:16.874 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:52:16 np0005591760 nova_compute[248045]: 2026-01-22 09:52:16.874 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:52:16 np0005591760 nova_compute[248045]: 2026-01-22 09:52:16.874 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:52:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:16 np0005591760 nova_compute[248045]: 2026-01-22 09:52:16.975 248049 DEBUG nova.network.neutron [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Successfully updated port: 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 04:52:16 np0005591760 nova_compute[248045]: 2026-01-22 09:52:16.988 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:52:16 np0005591760 nova_compute[248045]: 2026-01-22 09:52:16.989 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquired lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:52:16 np0005591760 nova_compute[248045]: 2026-01-22 09:52:16.989 248049 DEBUG nova.network.neutron [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 04:52:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:17.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:17.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:17.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:17.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:17 np0005591760 nova_compute[248045]: 2026-01-22 09:52:17.414 248049 DEBUG nova.compute.manager [req-cdaf5164-6d99-4577-9ea2-28364911d721 req-e022d478-adac-490f-80d8-a8080fb93d36 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Received event network-changed-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:52:17 np0005591760 nova_compute[248045]: 2026-01-22 09:52:17.415 248049 DEBUG nova.compute.manager [req-cdaf5164-6d99-4577-9ea2-28364911d721 req-e022d478-adac-490f-80d8-a8080fb93d36 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Refreshing instance network info cache due to event network-changed-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 04:52:17 np0005591760 nova_compute[248045]: 2026-01-22 09:52:17.415 248049 DEBUG oslo_concurrency.lockutils [req-cdaf5164-6d99-4577-9ea2-28364911d721 req-e022d478-adac-490f-80d8-a8080fb93d36 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:52:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:52:17] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 22 04:52:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:52:17] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 22 04:52:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:17.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:17 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:17 np0005591760 nova_compute[248045]: 2026-01-22 09:52:17.855 248049 DEBUG nova.network.neutron [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 04:52:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v600: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Jan 22 04:52:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 04:52:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 4379 writes, 19K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
Cumulative WAL: 4379 writes, 4379 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1533 writes, 6497 keys, 1533 commit groups, 1.0 writes per commit group, ingest: 11.24 MB, 0.02 MB/s
Interval WAL: 1533 writes, 1533 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    381.7      0.09              0.06        10    0.009       0      0       0.0       0.0
  L6      1/0   11.99 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    469.3    393.8      0.29              0.19         9    0.032     43K   4935       0.0       0.0
 Sum      1/0   11.99 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4    359.2    391.0      0.38              0.25        19    0.020     43K   4935       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.7    388.1    390.4      0.16              0.10         8    0.020     23K   2546       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    469.3    393.8      0.29              0.19         9    0.032     43K   4935       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    386.9      0.09              0.06         9    0.010       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     33.8      0.00              0.00         1    0.001       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.033, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.14 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 0.4 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d6a5b429b0#2 capacity: 304.00 MB usage: 8.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000109 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(506,8.13 MB,2.6752%) FilterBlock(20,125.73 KB,0.0403906%) IndexBlock(20,247.61 KB,0.0795415%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 22 04:52:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:52:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:18.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:52:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:52:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.492 248049 DEBUG nova.network.neutron [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Updating instance_info_cache with network_info: [{"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.507 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Releasing lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.507 248049 DEBUG nova.compute.manager [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Instance network_info: |[{"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.508 248049 DEBUG oslo_concurrency.lockutils [req-cdaf5164-6d99-4577-9ea2-28364911d721 req-e022d478-adac-490f-80d8-a8080fb93d36 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquired lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.508 248049 DEBUG nova.network.neutron [req-cdaf5164-6d99-4577-9ea2-28364911d721 req-e022d478-adac-490f-80d8-a8080fb93d36 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Refreshing network info cache for port 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.510 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Start _get_guest_xml network_info=[{"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T09:51:33Z,direct_url=<?>,disk_format='qcow2',id=bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a894ac5b4f744f208fa506d5e8f67970',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T09:51:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'encryption_format': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'encryption_options': None, 'image_id': 'bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.514 248049 WARNING nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.520 248049 DEBUG nova.virt.libvirt.host [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.521 248049 DEBUG nova.virt.libvirt.host [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.523 248049 DEBUG nova.virt.libvirt.host [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.524 248049 DEBUG nova.virt.libvirt.host [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.524 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.524 248049 DEBUG nova.virt.hardware [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T09:51:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6eff66ba-fb3e-4ca7-b05b-920b01d9affd',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T09:51:33Z,direct_url=<?>,disk_format='qcow2',id=bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a894ac5b4f744f208fa506d5e8f67970',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T09:51:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.525 248049 DEBUG nova.virt.hardware [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.525 248049 DEBUG nova.virt.hardware [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.525 248049 DEBUG nova.virt.hardware [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.525 248049 DEBUG nova.virt.hardware [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.525 248049 DEBUG nova.virt.hardware [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.525 248049 DEBUG nova.virt.hardware [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.526 248049 DEBUG nova.virt.hardware [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.526 248049 DEBUG nova.virt.hardware [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.526 248049 DEBUG nova.virt.hardware [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.526 248049 DEBUG nova.virt.hardware [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.530 248049 DEBUG nova.privsep.utils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.530 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:52:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:18.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:18.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:18.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:18.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.879 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.900 248049 DEBUG nova.storage.rbd_utils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:52:18 np0005591760 nova_compute[248045]: 2026-01-22 09:52:18.903 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:52:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0009530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 22 04:52:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/920193017' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.255 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.257 248049 DEBUG nova.virt.libvirt.vif [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T09:52:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-751480636',display_name='tempest-TestNetworkBasicOps-server-751480636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-751480636',id=1,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBILiBlL+TvH2gMCwjifyc/7Gm6jqtzdZz82DpKlqMzB6gNMcB2nyBl3WjXUnQtbtT+iCTUq5H4q1VSudwoC9f3p38sE8mJ1gg5Tmnybu9QHQy5PAl4rj4HJKymnAARdITw==',key_name='tempest-TestNetworkBasicOps-1904070563',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-7yqiqcub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T09:52:13Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=db3f9d63-cffc-4b71-b42f-7bd2d9e41955,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.257 248049 DEBUG nova.network.os_vif_util [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.258 248049 DEBUG nova.network.os_vif_util [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:7b:30,bridge_name='br-int',has_traffic_filtering=True,id=3ae9c689-1b36-4b9f-b18d-ee8cb51409c6,network=Network(b81a32f4-9ca3-4bd8-ba9d-4dddb997108a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae9c689-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.260 248049 DEBUG nova.objects.instance [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid db3f9d63-cffc-4b71-b42f-7bd2d9e41955 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.281 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] End _get_guest_xml xml=<domain type="kvm">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  <uuid>db3f9d63-cffc-4b71-b42f-7bd2d9e41955</uuid>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  <name>instance-00000001</name>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  <memory>131072</memory>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  <vcpu>1</vcpu>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  <metadata>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <nova:name>tempest-TestNetworkBasicOps-server-751480636</nova:name>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <nova:creationTime>2026-01-22 09:52:18</nova:creationTime>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <nova:flavor name="m1.nano">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <nova:memory>128</nova:memory>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <nova:disk>1</nova:disk>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <nova:swap>0</nova:swap>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <nova:vcpus>1</nova:vcpus>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      </nova:flavor>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <nova:owner>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <nova:user uuid="4428dd9b0fb64c25b8f33b0050d4ef6f">tempest-TestNetworkBasicOps-349110285-project-member</nova:user>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <nova:project uuid="05af97dae0f4449ba7eb640bcd3f61e6">tempest-TestNetworkBasicOps-349110285</nova:project>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      </nova:owner>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <nova:root type="image" uuid="bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <nova:ports>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <nova:port uuid="3ae9c689-1b36-4b9f-b18d-ee8cb51409c6">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        </nova:port>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      </nova:ports>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    </nova:instance>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  </metadata>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  <sysinfo type="smbios">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <system>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <entry name="manufacturer">RDO</entry>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <entry name="product">OpenStack Compute</entry>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <entry name="serial">db3f9d63-cffc-4b71-b42f-7bd2d9e41955</entry>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <entry name="uuid">db3f9d63-cffc-4b71-b42f-7bd2d9e41955</entry>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <entry name="family">Virtual Machine</entry>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    </system>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  </sysinfo>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  <os>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <boot dev="hd"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <smbios mode="sysinfo"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  </os>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  <features>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <acpi/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <apic/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <vmcoreinfo/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  </features>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  <clock offset="utc">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <timer name="hpet" present="no"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  </clock>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  <cpu mode="host-model" match="exact">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  </cpu>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  <devices>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <disk type="network" device="disk">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <driver type="raw" cache="none"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <source protocol="rbd" name="vms/db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <host name="192.168.122.100" port="6789"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <host name="192.168.122.102" port="6789"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <host name="192.168.122.101" port="6789"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <auth username="openstack">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <secret type="ceph" uuid="43df7a30-cf5f-5209-adfd-bf44298b19f2"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <target dev="vda" bus="virtio"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <disk type="network" device="cdrom">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <driver type="raw" cache="none"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <source protocol="rbd" name="vms/db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk.config">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <host name="192.168.122.100" port="6789"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <host name="192.168.122.102" port="6789"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <host name="192.168.122.101" port="6789"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <auth username="openstack">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:        <secret type="ceph" uuid="43df7a30-cf5f-5209-adfd-bf44298b19f2"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <target dev="sda" bus="sata"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <interface type="ethernet">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <mac address="fa:16:3e:31:7b:30"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <model type="virtio"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <driver name="vhost" rx_queue_size="512"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <mtu size="1442"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <target dev="tap3ae9c689-1b"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    </interface>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <serial type="pty">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <log file="/var/lib/nova/instances/db3f9d63-cffc-4b71-b42f-7bd2d9e41955/console.log" append="off"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    </serial>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <video>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <model type="virtio"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    </video>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <input type="tablet" bus="usb"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <rng model="virtio">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <backend model="random">/dev/urandom</backend>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    </rng>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <controller type="usb" index="0"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    <memballoon model="virtio">
Jan 22 04:52:19 np0005591760 nova_compute[248045]:      <stats period="10"/>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:    </memballoon>
Jan 22 04:52:19 np0005591760 nova_compute[248045]:  </devices>
Jan 22 04:52:19 np0005591760 nova_compute[248045]: </domain>
Jan 22 04:52:19 np0005591760 nova_compute[248045]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.282 248049 DEBUG nova.compute.manager [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Preparing to wait for external event network-vif-plugged-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.282 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.282 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.282 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.283 248049 DEBUG nova.virt.libvirt.vif [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T09:52:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-751480636',display_name='tempest-TestNetworkBasicOps-server-751480636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-751480636',id=1,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBILiBlL+TvH2gMCwjifyc/7Gm6jqtzdZz82DpKlqMzB6gNMcB2nyBl3WjXUnQtbtT+iCTUq5H4q1VSudwoC9f3p38sE8mJ1gg5Tmnybu9QHQy5PAl4rj4HJKymnAARdITw==',key_name='tempest-TestNetworkBasicOps-1904070563',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-7yqiqcub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T09:52:13Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=db3f9d63-cffc-4b71-b42f-7bd2d9e41955,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.283 248049 DEBUG nova.network.os_vif_util [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.284 248049 DEBUG nova.network.os_vif_util [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:7b:30,bridge_name='br-int',has_traffic_filtering=True,id=3ae9c689-1b36-4b9f-b18d-ee8cb51409c6,network=Network(b81a32f4-9ca3-4bd8-ba9d-4dddb997108a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae9c689-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.284 248049 DEBUG os_vif [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:7b:30,bridge_name='br-int',has_traffic_filtering=True,id=3ae9c689-1b36-4b9f-b18d-ee8cb51409c6,network=Network(b81a32f4-9ca3-4bd8-ba9d-4dddb997108a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae9c689-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.311 248049 DEBUG ovsdbapp.backend.ovs_idl [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.311 248049 DEBUG ovsdbapp.backend.ovs_idl [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.311 248049 DEBUG ovsdbapp.backend.ovs_idl [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.312 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.312 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.312 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.313 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.314 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.315 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:52:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:52:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:52:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:52:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.323 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.323 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.324 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.325 248049 INFO oslo.privsep.daemon [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp_wsv90hs/privsep.sock']
Jan 22 04:52:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:52:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:52:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:19.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:19 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.867 248049 INFO oslo.privsep.daemon [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Spawned new privsep daemon via rootwrap
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.790 252883 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.793 252883 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.795 252883 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.795 252883 INFO oslo.privsep.daemon [-] privsep daemon running as pid 252883
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.870 248049 DEBUG nova.network.neutron [req-cdaf5164-6d99-4577-9ea2-28364911d721 req-e022d478-adac-490f-80d8-a8080fb93d36 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Updated VIF entry in instance network info cache for port 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.871 248049 DEBUG nova.network.neutron [req-cdaf5164-6d99-4577-9ea2-28364911d721 req-e022d478-adac-490f-80d8-a8080fb93d36 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Updating instance_info_cache with network_info: [{"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 04:52:19 np0005591760 nova_compute[248045]: 2026-01-22 09:52:19.892 248049 DEBUG oslo_concurrency.lockutils [req-cdaf5164-6d99-4577-9ea2-28364911d721 req-e022d478-adac-490f-80d8-a8080fb93d36 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Releasing lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 04:52:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v601: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.124 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.124 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ae9c689-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.125 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ae9c689-1b, col_values=(('external_ids', {'iface-id': '3ae9c689-1b36-4b9f-b18d-ee8cb51409c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:31:7b:30', 'vm-uuid': 'db3f9d63-cffc-4b71-b42f-7bd2d9e41955'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.126 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:52:20 np0005591760 NetworkManager[48920]: <info>  [1769075540.1276] manager: (tap3ae9c689-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.130 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.132 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.132 248049 INFO os_vif [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:7b:30,bridge_name='br-int',has_traffic_filtering=True,id=3ae9c689-1b36-4b9f-b18d-ee8cb51409c6,network=Network(b81a32f4-9ca3-4bd8-ba9d-4dddb997108a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae9c689-1b')
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.169 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.169 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.170 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No VIF found with MAC fa:16:3e:31:7b:30, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.170 248049 INFO nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Using config drive
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.190 248049 DEBUG nova.storage.rbd_utils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 04:52:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:20.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:20 np0005591760 nova_compute[248045]: 2026-01-22 09:52:20.272 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:52:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.159 248049 INFO nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Creating config drive at /var/lib/nova/instances/db3f9d63-cffc-4b71-b42f-7bd2d9e41955/disk.config
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.163 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db3f9d63-cffc-4b71-b42f-7bd2d9e41955/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxizzb6lm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.284 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db3f9d63-cffc-4b71-b42f-7bd2d9e41955/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxizzb6lm" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.305 248049 DEBUG nova.storage.rbd_utils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.307 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db3f9d63-cffc-4b71-b42f-7bd2d9e41955/disk.config db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.386 248049 DEBUG oslo_concurrency.processutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db3f9d63-cffc-4b71-b42f-7bd2d9e41955/disk.config db3f9d63-cffc-4b71-b42f-7bd2d9e41955_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.387 248049 INFO nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Deleting local config drive /var/lib/nova/instances/db3f9d63-cffc-4b71-b42f-7bd2d9e41955/disk.config because it was imported into RBD.
Jan 22 04:52:21 np0005591760 systemd[1]: Starting libvirt secret daemon...
Jan 22 04:52:21 np0005591760 systemd[1]: Started libvirt secret daemon.
Jan 22 04:52:21 np0005591760 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 22 04:52:21 np0005591760 kernel: tap3ae9c689-1b: entered promiscuous mode
Jan 22 04:52:21 np0005591760 NetworkManager[48920]: <info>  [1769075541.4636] manager: (tap3ae9c689-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 22 04:52:21 np0005591760 ovn_controller[154073]: 2026-01-22T09:52:21Z|00027|binding|INFO|Claiming lport 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 for this chassis.
Jan 22 04:52:21 np0005591760 ovn_controller[154073]: 2026-01-22T09:52:21Z|00028|binding|INFO|3ae9c689-1b36-4b9f-b18d-ee8cb51409c6: Claiming fa:16:3e:31:7b:30 10.100.0.14
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.468 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:52:21 np0005591760 systemd-udevd[252982]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:52:21 np0005591760 NetworkManager[48920]: <info>  [1769075541.5009] device (tap3ae9c689-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 04:52:21 np0005591760 NetworkManager[48920]: <info>  [1769075541.5013] device (tap3ae9c689-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 04:52:21 np0005591760 systemd-machined[216371]: New machine qemu-1-instance-00000001.
Jan 22 04:52:21 np0005591760 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.560 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:52:21 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:21.564 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:7b:30 10.100.0.14'], port_security=['fa:16:3e:31:7b:30 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'db3f9d63-cffc-4b71-b42f-7bd2d9e41955', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6967a5cb-6cc9-4914-adb9-bd594f436add', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51ac6686-d036-4e59-bb31-55b907c04e7d, chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], logical_port=3ae9c689-1b36-4b9f-b18d-ee8cb51409c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 04:52:21 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:21.565 164103 INFO neutron.agent.ovn.metadata.agent [-] Port 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 in datapath b81a32f4-9ca3-4bd8-ba9d-4dddb997108a bound to our chassis
Jan 22 04:52:21 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:21.567 164103 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b81a32f4-9ca3-4bd8-ba9d-4dddb997108a
Jan 22 04:52:21 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:21.568 164103 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp_jdobvqk/privsep.sock']
Jan 22 04:52:21 np0005591760 ovn_controller[154073]: 2026-01-22T09:52:21Z|00029|binding|INFO|Setting lport 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 ovn-installed in OVS
Jan 22 04:52:21 np0005591760 ovn_controller[154073]: 2026-01-22T09:52:21Z|00030|binding|INFO|Setting lport 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 up in Southbound
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.576 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:52:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:21.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:21 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0009530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.890 248049 DEBUG nova.virt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Emitting event <LifecycleEvent: 1769075541.8903751, db3f9d63-cffc-4b71-b42f-7bd2d9e41955 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.891 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] VM Started (Lifecycle Event)
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.941 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.943 248049 DEBUG nova.virt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Emitting event <LifecycleEvent: 1769075541.8914206, db3f9d63-cffc-4b71-b42f-7bd2d9e41955 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.944 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] VM Paused (Lifecycle Event)
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.959 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.961 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 04:52:21 np0005591760 nova_compute[248045]: 2026-01-22 09:52:21.977 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 04:52:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v602: 337 pgs: 337 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Jan 22 04:52:22 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:22.152 164103 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 22 04:52:22 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:22.154 164103 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_jdobvqk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 22 04:52:22 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:22.066 253045 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 22 04:52:22 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:22.069 253045 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 22 04:52:22 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:22.071 253045 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 22 04:52:22 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:22.071 253045 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253045
Jan 22 04:52:22 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:22.157 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[7275da99-f72a-4056-a1a7-0b0ed7815ddc]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.220 248049 DEBUG nova.compute.manager [req-1297cbfc-49a2-4bb6-9cd8-b9973d54e37f req-a4b4f7a7-5876-42c1-a7b6-777950b01590 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Received event network-vif-plugged-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.221 248049 DEBUG oslo_concurrency.lockutils [req-1297cbfc-49a2-4bb6-9cd8-b9973d54e37f req-a4b4f7a7-5876-42c1-a7b6-777950b01590 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.221 248049 DEBUG oslo_concurrency.lockutils [req-1297cbfc-49a2-4bb6-9cd8-b9973d54e37f req-a4b4f7a7-5876-42c1-a7b6-777950b01590 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.221 248049 DEBUG oslo_concurrency.lockutils [req-1297cbfc-49a2-4bb6-9cd8-b9973d54e37f req-a4b4f7a7-5876-42c1-a7b6-777950b01590 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.222 248049 DEBUG nova.compute.manager [req-1297cbfc-49a2-4bb6-9cd8-b9973d54e37f req-a4b4f7a7-5876-42c1-a7b6-777950b01590 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Processing event network-vif-plugged-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.222 248049 DEBUG nova.compute.manager [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.230 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.231 248049 DEBUG nova.virt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Emitting event <LifecycleEvent: 1769075542.2308066, db3f9d63-cffc-4b71-b42f-7bd2d9e41955 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.231 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] VM Resumed (Lifecycle Event)
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.234 248049 INFO nova.virt.libvirt.driver [-] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Instance spawned successfully.
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.234 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.253 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.257 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 04:52:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:22.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.281 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.286 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.287 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.288 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.288 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.289 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.290 248049 DEBUG nova.virt.libvirt.driver [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.357 248049 INFO nova.compute.manager [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Took 9.28 seconds to spawn the instance on the hypervisor.
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.358 248049 DEBUG nova.compute.manager [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 04:52:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.414 248049 INFO nova.compute.manager [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Took 9.96 seconds to build instance.
Jan 22 04:52:22 np0005591760 nova_compute[248045]: 2026-01-22 09:52:22.430 248049 DEBUG oslo_concurrency.lockutils [None req-a5c0a186-0cdb-48be-936c-60bd52c3195a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:52:22 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:22.715 253045 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 04:52:22 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:22.715 253045 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 04:52:22 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:22.715 253045 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:52:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.271 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[17117ee5-fdff-4328-82b3-6c265aac1ddc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.285 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb81a32f4-91 in ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.286 253045 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb81a32f4-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.287 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[d069cbfe-b86c-4be0-a057-c420a557d0ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.290 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[d95234e2-1e23-4369-80be-c7aba7320197]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.308 164492 DEBUG oslo.privsep.daemon [-] privsep: reply[59bf6d6c-87ef-4fe0-9c2a-4e238ade8451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.323 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2f4f46-869d-480c-bb74-8d65af5f5e4b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.324 164103 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpo8a_8zch/privsep.sock']
Jan 22 04:52:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:52:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 22 04:52:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 22 04:52:23 np0005591760 ceph-mon[74254]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 22 04:52:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:23.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:23 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.916 164103 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.917 164103 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpo8a_8zch/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.825 253060 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.834 253060 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.835 253060 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.836 253060 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253060
Jan 22 04:52:23 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:23.919 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[1a257539-69bd-4cd4-b30b-c5d1dea08623]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v604: 337 pgs: 337 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Jan 22 04:52:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:24.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:24 np0005591760 nova_compute[248045]: 2026-01-22 09:52:24.272 248049 DEBUG nova.compute.manager [req-f2bded88-bef6-4422-a78b-9bdace4e4537 req-d0635ea4-4b15-480b-aa21-f91fcbc14948 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Received event network-vif-plugged-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 04:52:24 np0005591760 nova_compute[248045]: 2026-01-22 09:52:24.272 248049 DEBUG oslo_concurrency.lockutils [req-f2bded88-bef6-4422-a78b-9bdace4e4537 req-d0635ea4-4b15-480b-aa21-f91fcbc14948 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 04:52:24 np0005591760 nova_compute[248045]: 2026-01-22 09:52:24.272 248049 DEBUG oslo_concurrency.lockutils [req-f2bded88-bef6-4422-a78b-9bdace4e4537 req-d0635ea4-4b15-480b-aa21-f91fcbc14948 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 04:52:24 np0005591760 nova_compute[248045]: 2026-01-22 09:52:24.272 248049 DEBUG oslo_concurrency.lockutils [req-f2bded88-bef6-4422-a78b-9bdace4e4537 req-d0635ea4-4b15-480b-aa21-f91fcbc14948 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:52:24 np0005591760 nova_compute[248045]: 2026-01-22 09:52:24.273 248049 DEBUG nova.compute.manager [req-f2bded88-bef6-4422-a78b-9bdace4e4537 req-d0635ea4-4b15-480b-aa21-f91fcbc14948 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] No waiting events found dispatching network-vif-plugged-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 04:52:24 np0005591760 nova_compute[248045]: 2026-01-22 09:52:24.273 248049 WARNING nova.compute.manager [req-f2bded88-bef6-4422-a78b-9bdace4e4537 req-d0635ea4-4b15-480b-aa21-f91fcbc14948 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Received unexpected event network-vif-plugged-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 for instance with vm_state active and task_state None.
Jan 22 04:52:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:24.339 253060 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 04:52:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:24.339 253060 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 04:52:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:24.339 253060 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:52:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a0009530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:24.836 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a2947f-d355-4ccb-83db-80ef2073ddb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:24.853 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7b06e0-7ded-4620-87c2-b6df07733a2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:24 np0005591760 NetworkManager[48920]: <info>  [1769075544.8543] manager: (tapb81a32f4-90): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 22 04:52:24 np0005591760 systemd-udevd[253074]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:52:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:24.875 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1f1d37-2a7c-42fb-a1be-a7d56bc71126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:24.880 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[fe06daea-5118-43b3-aa4b-c6c6d11c5908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:24 np0005591760 NetworkManager[48920]: <info>  [1769075544.9006] device (tapb81a32f4-90): carrier: link connected
Jan 22 04:52:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:24.904 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[09aa59a3-013d-44a0-b9d1-79cb46f893dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:24.920 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4d6da4-f9cd-4aca-9054-2fbef92085c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb81a32f4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:ab:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 312427, 'reachable_time': 35693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253084, 'error': None, 'target': 'ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:24.934 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[9669defe-8f4a-4184-aff6-c3468eae5350]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:abd1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 312427, 'tstamp': 312427}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253086, 'error': None, 'target': 'ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:52:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:24.946 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7d23c5-8ab5-41be-9b1e-b11e2dd0b40c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb81a32f4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:ab:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 312427, 'reachable_time': 35693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253087, 'error': None, 'target': 'ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:52:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:24.967 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fb45dc-1f6a-48bf-b8dc-cd52d09808bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:25.017 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[a5210539-3c85-4524-be70-1c79673f111b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:25.019 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb81a32f4-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:25.019 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:25.019 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb81a32f4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:52:25 np0005591760 kernel: tapb81a32f4-90: entered promiscuous mode
Jan 22 04:52:25 np0005591760 NetworkManager[48920]: <info>  [1769075545.0217] manager: (tapb81a32f4-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 22 04:52:25 np0005591760 nova_compute[248045]: 2026-01-22 09:52:25.021 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:25 np0005591760 nova_compute[248045]: 2026-01-22 09:52:25.023 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:25.024 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb81a32f4-90, col_values=(('external_ids', {'iface-id': '73a4c783-9ced-4308-bd61-5ef603c35996'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:52:25 np0005591760 ovn_controller[154073]: 2026-01-22T09:52:25Z|00031|binding|INFO|Releasing lport 73a4c783-9ced-4308-bd61-5ef603c35996 from this chassis (sb_readonly=0)
Jan 22 04:52:25 np0005591760 nova_compute[248045]: 2026-01-22 09:52:25.025 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:25 np0005591760 nova_compute[248045]: 2026-01-22 09:52:25.026 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:25.027 164103 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b81a32f4-9ca3-4bd8-ba9d-4dddb997108a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b81a32f4-9ca3-4bd8-ba9d-4dddb997108a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:25.027 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[a725aeef-39d6-4c53-89b1-38e372311a75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:25.028 164103 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: global
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    log         /dev/log local0 debug
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    log-tag     haproxy-metadata-proxy-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    user        root
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    group       root
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    maxconn     1024
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    pidfile     /var/lib/neutron/external/pids/b81a32f4-9ca3-4bd8-ba9d-4dddb997108a.pid.haproxy
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    daemon
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: 
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: defaults
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    log global
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    mode http
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    option httplog
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    option dontlognull
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    option http-server-close
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    option forwardfor
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    retries                 3
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    timeout http-request    30s
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    timeout connect         30s
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    timeout client          32s
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    timeout server          32s
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    timeout http-keep-alive 30s
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: 
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: 
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: listen listener
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    bind 169.254.169.254:80
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    server metadata /var/lib/neutron/metadata_proxy
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]:    http-request add-header X-OVN-Network-ID b81a32f4-9ca3-4bd8-ba9d-4dddb997108a
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 22 04:52:25 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:25.028 164103 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a', 'env', 'PROCESS_TAG=haproxy-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b81a32f4-9ca3-4bd8-ba9d-4dddb997108a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 22 04:52:25 np0005591760 nova_compute[248045]: 2026-01-22 09:52:25.043 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:25 np0005591760 ovn_controller[154073]: 2026-01-22T09:52:25Z|00032|binding|INFO|Releasing lport 73a4c783-9ced-4308-bd61-5ef603c35996 from this chassis (sb_readonly=0)
Jan 22 04:52:25 np0005591760 nova_compute[248045]: 2026-01-22 09:52:25.056 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:25 np0005591760 NetworkManager[48920]: <info>  [1769075545.0571] manager: (patch-provnet-397c94eb-88af-4737-bae3-7adb982d097b-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Jan 22 04:52:25 np0005591760 NetworkManager[48920]: <info>  [1769075545.0574] device (patch-provnet-397c94eb-88af-4737-bae3-7adb982d097b-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:52:25 np0005591760 NetworkManager[48920]: <warn>  [1769075545.0575] device (patch-provnet-397c94eb-88af-4737-bae3-7adb982d097b-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 04:52:25 np0005591760 NetworkManager[48920]: <info>  [1769075545.0582] manager: (patch-br-int-to-provnet-397c94eb-88af-4737-bae3-7adb982d097b): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Jan 22 04:52:25 np0005591760 NetworkManager[48920]: <info>  [1769075545.0584] device (patch-br-int-to-provnet-397c94eb-88af-4737-bae3-7adb982d097b)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 04:52:25 np0005591760 NetworkManager[48920]: <warn>  [1769075545.0585] device (patch-br-int-to-provnet-397c94eb-88af-4737-bae3-7adb982d097b)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 04:52:25 np0005591760 NetworkManager[48920]: <info>  [1769075545.0594] manager: (patch-br-int-to-provnet-397c94eb-88af-4737-bae3-7adb982d097b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 22 04:52:25 np0005591760 NetworkManager[48920]: <info>  [1769075545.0600] manager: (patch-provnet-397c94eb-88af-4737-bae3-7adb982d097b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 22 04:52:25 np0005591760 NetworkManager[48920]: <info>  [1769075545.0604] device (patch-provnet-397c94eb-88af-4737-bae3-7adb982d097b-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 22 04:52:25 np0005591760 NetworkManager[48920]: <info>  [1769075545.0608] device (patch-br-int-to-provnet-397c94eb-88af-4737-bae3-7adb982d097b)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 22 04:52:25 np0005591760 ovn_controller[154073]: 2026-01-22T09:52:25Z|00033|binding|INFO|Releasing lport 73a4c783-9ced-4308-bd61-5ef603c35996 from this chassis (sb_readonly=0)
Jan 22 04:52:25 np0005591760 nova_compute[248045]: 2026-01-22 09:52:25.104 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:25 np0005591760 nova_compute[248045]: 2026-01-22 09:52:25.107 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:25 np0005591760 nova_compute[248045]: 2026-01-22 09:52:25.127 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:25 np0005591760 nova_compute[248045]: 2026-01-22 09:52:25.273 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:25 np0005591760 podman[253116]: 2026-01-22 09:52:25.341659563 +0000 UTC m=+0.032355643 container create 3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 04:52:25 np0005591760 systemd[1]: Started libpod-conmon-3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b.scope.
Jan 22 04:52:25 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:52:25 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b93516223b5b4cefffc25768997c6b84c51f68b6c0185a6589bc09fb1575504/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 04:52:25 np0005591760 podman[253116]: 2026-01-22 09:52:25.415476611 +0000 UTC m=+0.106172682 container init 3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 04:52:25 np0005591760 podman[253116]: 2026-01-22 09:52:25.421400477 +0000 UTC m=+0.112096547 container start 3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:52:25 np0005591760 podman[253116]: 2026-01-22 09:52:25.326538935 +0000 UTC m=+0.017235025 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 22 04:52:25 np0005591760 neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a[253128]: [NOTICE]   (253132) : New worker (253134) forked
Jan 22 04:52:25 np0005591760 neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a[253128]: [NOTICE]   (253132) : Loading success.
Jan 22 04:52:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:25.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:25 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v605: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 129 op/s
Jan 22 04:52:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:26.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:26 np0005591760 nova_compute[248045]: 2026-01-22 09:52:26.340 248049 DEBUG nova.compute.manager [req-fa93f17f-379c-4caa-b845-f0b8b25fb674 req-0174342f-5a13-433d-bbee-f3a268ee27d6 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Received event network-changed-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:52:26 np0005591760 nova_compute[248045]: 2026-01-22 09:52:26.340 248049 DEBUG nova.compute.manager [req-fa93f17f-379c-4caa-b845-f0b8b25fb674 req-0174342f-5a13-433d-bbee-f3a268ee27d6 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Refreshing instance network info cache due to event network-changed-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 04:52:26 np0005591760 nova_compute[248045]: 2026-01-22 09:52:26.341 248049 DEBUG oslo_concurrency.lockutils [req-fa93f17f-379c-4caa-b845-f0b8b25fb674 req-0174342f-5a13-433d-bbee-f3a268ee27d6 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:52:26 np0005591760 nova_compute[248045]: 2026-01-22 09:52:26.341 248049 DEBUG oslo_concurrency.lockutils [req-fa93f17f-379c-4caa-b845-f0b8b25fb674 req-0174342f-5a13-433d-bbee-f3a268ee27d6 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquired lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:52:26 np0005591760 nova_compute[248045]: 2026-01-22 09:52:26.341 248049 DEBUG nova.network.neutron [req-fa93f17f-379c-4caa-b845-f0b8b25fb674 req-0174342f-5a13-433d-bbee-f3a268ee27d6 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Refreshing network info cache for port 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 04:52:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:26 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a00096d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:27.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:27.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:27.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:27.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:52:27] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 22 04:52:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:52:27] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 22 04:52:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:27.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:27 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:27 np0005591760 nova_compute[248045]: 2026-01-22 09:52:27.867 248049 DEBUG nova.network.neutron [req-fa93f17f-379c-4caa-b845-f0b8b25fb674 req-0174342f-5a13-433d-bbee-f3a268ee27d6 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Updated VIF entry in instance network info cache for port 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 04:52:27 np0005591760 nova_compute[248045]: 2026-01-22 09:52:27.868 248049 DEBUG nova.network.neutron [req-fa93f17f-379c-4caa-b845-f0b8b25fb674 req-0174342f-5a13-433d-bbee-f3a268ee27d6 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Updating instance_info_cache with network_info: [{"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:52:27 np0005591760 nova_compute[248045]: 2026-01-22 09:52:27.883 248049 DEBUG oslo_concurrency.lockutils [req-fa93f17f-379c-4caa-b845-f0b8b25fb674 req-0174342f-5a13-433d-bbee-f3a268ee27d6 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Releasing lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:52:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v606: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Jan 22 04:52:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:28.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:52:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:28 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:28.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:28.867Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:28.867Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:28.867Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:28 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:29.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:29 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a00096d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v607: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Jan 22 04:52:30 np0005591760 nova_compute[248045]: 2026-01-22 09:52:30.127 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:30.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:30 np0005591760 nova_compute[248045]: 2026-01-22 09:52:30.274 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:30 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:31 np0005591760 podman[253145]: 2026-01-22 09:52:31.053312976 +0000 UTC m=+0.043054318 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 04:52:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:31.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:31 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v608: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Jan 22 04:52:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:32.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:32 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a00096d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:32 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:52:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:33.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:33 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:33 np0005591760 ovn_controller[154073]: 2026-01-22T09:52:33Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:31:7b:30 10.100.0.14
Jan 22 04:52:33 np0005591760 ovn_controller[154073]: 2026-01-22T09:52:33Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:31:7b:30 10.100.0.14
Jan 22 04:52:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v609: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 83 op/s
Jan 22 04:52:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:34.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:34 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697000e350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:34 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:35 np0005591760 podman[253192]: 2026-01-22 09:52:35.094257907 +0000 UTC m=+0.082117283 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 04:52:35 np0005591760 nova_compute[248045]: 2026-01-22 09:52:35.128 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:35 np0005591760 nova_compute[248045]: 2026-01-22 09:52:35.275 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:35.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:35 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v610: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 22 04:52:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:36.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:36 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:36 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:37.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:37.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:37.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:37.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:52:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 04:52:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:52:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 04:52:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:37 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c002670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:52:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:37.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:52:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v611: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 22 04:52:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:38.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:52:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:38 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a8004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:38.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:38.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:38.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:38.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:38 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:39 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:39.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:39 np0005591760 nova_compute[248045]: 2026-01-22 09:52:39.765 248049 INFO nova.compute.manager [None req-488e581f-10bb-474d-b743-0f93a16594c2 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Get console output#033[00m
Jan 22 04:52:39 np0005591760 nova_compute[248045]: 2026-01-22 09:52:39.770 248049 INFO oslo.privsep.daemon [None req-488e581f-10bb-474d-b743-0f93a16594c2 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpc43r5nqq/privsep.sock']#033[00m
Jan 22 04:52:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v612: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 22 04:52:40 np0005591760 nova_compute[248045]: 2026-01-22 09:52:40.130 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:40.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:40 np0005591760 nova_compute[248045]: 2026-01-22 09:52:40.279 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:40 np0005591760 nova_compute[248045]: 2026-01-22 09:52:40.338 248049 INFO oslo.privsep.daemon [None req-488e581f-10bb-474d-b743-0f93a16594c2 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 04:52:40 np0005591760 nova_compute[248045]: 2026-01-22 09:52:40.250 253225 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 04:52:40 np0005591760 nova_compute[248045]: 2026-01-22 09:52:40.253 253225 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 04:52:40 np0005591760 nova_compute[248045]: 2026-01-22 09:52:40.254 253225 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 22 04:52:40 np0005591760 nova_compute[248045]: 2026-01-22 09:52:40.254 253225 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253225#033[00m
Jan 22 04:52:40 np0005591760 nova_compute[248045]: 2026-01-22 09:52:40.416 253225 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 22 04:52:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:40 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c0031b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:40 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a80052a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:41 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:41.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v613: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 22 04:52:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:42.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095242 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:52:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:42 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c0031b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:52:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:43 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a80052a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:43.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095243 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:52:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v614: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 22 04:52:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:52:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:44.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:52:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:44 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:44 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:45 np0005591760 nova_compute[248045]: 2026-01-22 09:52:45.131 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:45 np0005591760 nova_compute[248045]: 2026-01-22 09:52:45.280 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:45 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c0031b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:45.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v615: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 22 04:52:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:46.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:46 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a80052a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:46 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:47.030Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:47.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:47.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:47.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:47.309 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:52:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:47.309 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:52:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:47.310 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:52:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:52:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 04:52:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:52:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 04:52:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:47 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:47.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v616: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 17 KiB/s wr, 1 op/s
Jan 22 04:52:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:48.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:52:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 22 04:52:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:48.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:48.869Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:48.869Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:48.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:48 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a8006710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:52:49
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['.nfs', 'images', 'default.rgw.meta', '.rgw.root', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.mgr']
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:52:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:49 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:49.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v617: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 17 KiB/s wr, 1 op/s
Jan 22 04:52:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:52:50 np0005591760 nova_compute[248045]: 2026-01-22 09:52:50.133 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:50 np0005591760 nova_compute[248045]: 2026-01-22 09:52:50.281 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:50.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:50 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 22 04:52:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1427732053' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 04:52:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 22 04:52:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1427732053' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 04:52:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:51 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a8006710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:51.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v618: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 17 KiB/s wr, 5 op/s
Jan 22 04:52:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:52.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:52 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:52 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:53 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:52:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:53 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:52:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:53 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:52:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:52:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:53 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:53.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v619: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 5.8 KiB/s wr, 5 op/s
Jan 22 04:52:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:54.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:54 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a8006710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:54 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:55 np0005591760 nova_compute[248045]: 2026-01-22 09:52:55.134 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:55 np0005591760 nova_compute[248045]: 2026-01-22 09:52:55.283 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:55 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:55.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:55 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:52:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v620: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 22 04:52:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:56.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:56 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:56 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a8007810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:57.030Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:57.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:57.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:57.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:52:57] "GET /metrics HTTP/1.1" 200 48603 "" "Prometheus/2.51.0"
Jan 22 04:52:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:52:57] "GET /metrics HTTP/1.1" 200 48603 "" "Prometheus/2.51.0"
Jan 22 04:52:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:57 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:57.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v621: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 22 04:52:58 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v622: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 43 op/s
Jan 22 04:52:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:52:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:52:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:52:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:52:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:52:58.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:52:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:58 np0005591760 podman[253432]: 2026-01-22 09:52:58.687974658 +0000 UTC m=+0.030800222 container create ad64af8e8e227586b3e6e7ffb1433c1dd1de2e13594d7ca545429a301d802d73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 04:52:58 np0005591760 systemd[1]: Started libpod-conmon-ad64af8e8e227586b3e6e7ffb1433c1dd1de2e13594d7ca545429a301d802d73.scope.
Jan 22 04:52:58 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:52:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:52:58 np0005591760 podman[253432]: 2026-01-22 09:52:58.752642963 +0000 UTC m=+0.095468547 container init ad64af8e8e227586b3e6e7ffb1433c1dd1de2e13594d7ca545429a301d802d73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 04:52:58 np0005591760 podman[253432]: 2026-01-22 09:52:58.758849922 +0000 UTC m=+0.101675486 container start ad64af8e8e227586b3e6e7ffb1433c1dd1de2e13594d7ca545429a301d802d73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_ritchie, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 04:52:58 np0005591760 flamboyant_ritchie[253445]: 167 167
Jan 22 04:52:58 np0005591760 systemd[1]: libpod-ad64af8e8e227586b3e6e7ffb1433c1dd1de2e13594d7ca545429a301d802d73.scope: Deactivated successfully.
Jan 22 04:52:58 np0005591760 podman[253432]: 2026-01-22 09:52:58.763075125 +0000 UTC m=+0.105900699 container attach ad64af8e8e227586b3e6e7ffb1433c1dd1de2e13594d7ca545429a301d802d73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 04:52:58 np0005591760 conmon[253445]: conmon ad64af8e8e227586b3e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad64af8e8e227586b3e6e7ffb1433c1dd1de2e13594d7ca545429a301d802d73.scope/container/memory.events
Jan 22 04:52:58 np0005591760 podman[253432]: 2026-01-22 09:52:58.764251593 +0000 UTC m=+0.107077157 container died ad64af8e8e227586b3e6e7ffb1433c1dd1de2e13594d7ca545429a301d802d73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 04:52:58 np0005591760 podman[253432]: 2026-01-22 09:52:58.675896394 +0000 UTC m=+0.018721979 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:52:58 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:52:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:52:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:52:58 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:52:58 np0005591760 systemd[1]: var-lib-containers-storage-overlay-7025dc6fcc63f7d041ea920a308bfc9895d0874dc93543ecd9d478a57d5785a3-merged.mount: Deactivated successfully.
Jan 22 04:52:58 np0005591760 podman[253432]: 2026-01-22 09:52:58.788276053 +0000 UTC m=+0.131101616 container remove ad64af8e8e227586b3e6e7ffb1433c1dd1de2e13594d7ca545429a301d802d73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_ritchie, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:52:58 np0005591760 systemd[1]: libpod-conmon-ad64af8e8e227586b3e6e7ffb1433c1dd1de2e13594d7ca545429a301d802d73.scope: Deactivated successfully.
Jan 22 04:52:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:58.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:58.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:58.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:52:58.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:52:58 np0005591760 podman[253468]: 2026-01-22 09:52:58.927141081 +0000 UTC m=+0.032587250 container create 359dc4d5dbf5bc53e0b5c3dba40244bd3e0cf4ff4abf508a2aee41776f4ebef2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:52:58 np0005591760 systemd[1]: Started libpod-conmon-359dc4d5dbf5bc53e0b5c3dba40244bd3e0cf4ff4abf508a2aee41776f4ebef2.scope.
Jan 22 04:52:58 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:52:58 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c5391589dc8a0b3e346dd4f2a206a96cc5f7cb82cc758f6876726ed0974e2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:52:58 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c5391589dc8a0b3e346dd4f2a206a96cc5f7cb82cc758f6876726ed0974e2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:52:58 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c5391589dc8a0b3e346dd4f2a206a96cc5f7cb82cc758f6876726ed0974e2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:52:58 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c5391589dc8a0b3e346dd4f2a206a96cc5f7cb82cc758f6876726ed0974e2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:52:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:58 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a4002630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:58 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c5391589dc8a0b3e346dd4f2a206a96cc5f7cb82cc758f6876726ed0974e2f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:52:58 np0005591760 podman[253468]: 2026-01-22 09:52:58.987038233 +0000 UTC m=+0.092484412 container init 359dc4d5dbf5bc53e0b5c3dba40244bd3e0cf4ff4abf508a2aee41776f4ebef2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_pascal, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 04:52:58 np0005591760 podman[253468]: 2026-01-22 09:52:58.993566968 +0000 UTC m=+0.099013137 container start 359dc4d5dbf5bc53e0b5c3dba40244bd3e0cf4ff4abf508a2aee41776f4ebef2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:52:58 np0005591760 podman[253468]: 2026-01-22 09:52:58.997576956 +0000 UTC m=+0.103023145 container attach 359dc4d5dbf5bc53e0b5c3dba40244bd3e0cf4ff4abf508a2aee41776f4ebef2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_pascal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:52:59 np0005591760 podman[253468]: 2026-01-22 09:52:58.914272837 +0000 UTC m=+0.019719027 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:52:59 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:59.189 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:52:1d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:ec:a7:e9:bb:bd'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:52:59 np0005591760 nova_compute[248045]: 2026-01-22 09:52:59.190 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:52:59 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:59.190 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 04:52:59 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:52:59.191 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:52:59 np0005591760 funny_pascal[253481]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:52:59 np0005591760 funny_pascal[253481]: --> All data devices are unavailable
Jan 22 04:52:59 np0005591760 systemd[1]: libpod-359dc4d5dbf5bc53e0b5c3dba40244bd3e0cf4ff4abf508a2aee41776f4ebef2.scope: Deactivated successfully.
Jan 22 04:52:59 np0005591760 podman[253496]: 2026-01-22 09:52:59.295227718 +0000 UTC m=+0.017125870 container died 359dc4d5dbf5bc53e0b5c3dba40244bd3e0cf4ff4abf508a2aee41776f4ebef2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 04:52:59 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c6c5391589dc8a0b3e346dd4f2a206a96cc5f7cb82cc758f6876726ed0974e2f-merged.mount: Deactivated successfully.
Jan 22 04:52:59 np0005591760 podman[253496]: 2026-01-22 09:52:59.347696578 +0000 UTC m=+0.069594721 container remove 359dc4d5dbf5bc53e0b5c3dba40244bd3e0cf4ff4abf508a2aee41776f4ebef2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:52:59 np0005591760 systemd[1]: libpod-conmon-359dc4d5dbf5bc53e0b5c3dba40244bd3e0cf4ff4abf508a2aee41776f4ebef2.scope: Deactivated successfully.
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001105905999706974 of space, bias 1.0, pg target 0.3317717999120922 quantized to 32 (current 32)
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:52:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:52:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:52:59 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:52:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:52:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:52:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:52:59.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:52:59 np0005591760 podman[253588]: 2026-01-22 09:52:59.802855216 +0000 UTC m=+0.032877857 container create 7a236f4419fa0daa490ea072bdc5c18d17af7cea2891c36af60b3288fba4215e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:52:59 np0005591760 systemd[1]: Started libpod-conmon-7a236f4419fa0daa490ea072bdc5c18d17af7cea2891c36af60b3288fba4215e.scope.
Jan 22 04:52:59 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:52:59 np0005591760 podman[253588]: 2026-01-22 09:52:59.871829676 +0000 UTC m=+0.101852328 container init 7a236f4419fa0daa490ea072bdc5c18d17af7cea2891c36af60b3288fba4215e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_heyrovsky, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:52:59 np0005591760 podman[253588]: 2026-01-22 09:52:59.875960812 +0000 UTC m=+0.105983455 container start 7a236f4419fa0daa490ea072bdc5c18d17af7cea2891c36af60b3288fba4215e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_heyrovsky, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 04:52:59 np0005591760 podman[253588]: 2026-01-22 09:52:59.877262035 +0000 UTC m=+0.107284677 container attach 7a236f4419fa0daa490ea072bdc5c18d17af7cea2891c36af60b3288fba4215e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_heyrovsky, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:52:59 np0005591760 xenodochial_heyrovsky[253601]: 167 167
Jan 22 04:52:59 np0005591760 systemd[1]: libpod-7a236f4419fa0daa490ea072bdc5c18d17af7cea2891c36af60b3288fba4215e.scope: Deactivated successfully.
Jan 22 04:52:59 np0005591760 podman[253588]: 2026-01-22 09:52:59.880376875 +0000 UTC m=+0.110399518 container died 7a236f4419fa0daa490ea072bdc5c18d17af7cea2891c36af60b3288fba4215e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:52:59 np0005591760 podman[253588]: 2026-01-22 09:52:59.790950979 +0000 UTC m=+0.020973641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:52:59 np0005591760 systemd[1]: var-lib-containers-storage-overlay-daf2e288cb89ca928cd499a2ca2dabe7eda4dbf10613e4bdefd647f1a80e2a54-merged.mount: Deactivated successfully.
Jan 22 04:52:59 np0005591760 podman[253588]: 2026-01-22 09:52:59.899468569 +0000 UTC m=+0.129491211 container remove 7a236f4419fa0daa490ea072bdc5c18d17af7cea2891c36af60b3288fba4215e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_heyrovsky, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:52:59 np0005591760 systemd[1]: libpod-conmon-7a236f4419fa0daa490ea072bdc5c18d17af7cea2891c36af60b3288fba4215e.scope: Deactivated successfully.
Jan 22 04:53:00 np0005591760 podman[253624]: 2026-01-22 09:53:00.065602593 +0000 UTC m=+0.043933656 container create bfc94de4e5d11d1650c0d009a5144cbe58213726cdccb8464c7f4844b8394952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_jones, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 22 04:53:00 np0005591760 systemd[1]: Started libpod-conmon-bfc94de4e5d11d1650c0d009a5144cbe58213726cdccb8464c7f4844b8394952.scope.
Jan 22 04:53:00 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:53:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de71c20a1ac25ed46f54ba13ec51ad5f9221e839a0c4c79c33e3dfa55bcb347/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de71c20a1ac25ed46f54ba13ec51ad5f9221e839a0c4c79c33e3dfa55bcb347/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de71c20a1ac25ed46f54ba13ec51ad5f9221e839a0c4c79c33e3dfa55bcb347/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de71c20a1ac25ed46f54ba13ec51ad5f9221e839a0c4c79c33e3dfa55bcb347/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:00 np0005591760 podman[253624]: 2026-01-22 09:53:00.135104998 +0000 UTC m=+0.113436062 container init bfc94de4e5d11d1650c0d009a5144cbe58213726cdccb8464c7f4844b8394952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 22 04:53:00 np0005591760 nova_compute[248045]: 2026-01-22 09:53:00.135 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:00 np0005591760 podman[253624]: 2026-01-22 09:53:00.141060814 +0000 UTC m=+0.119391876 container start bfc94de4e5d11d1650c0d009a5144cbe58213726cdccb8464c7f4844b8394952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_jones, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 04:53:00 np0005591760 podman[253624]: 2026-01-22 09:53:00.14261862 +0000 UTC m=+0.120949704 container attach bfc94de4e5d11d1650c0d009a5144cbe58213726cdccb8464c7f4844b8394952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:53:00 np0005591760 podman[253624]: 2026-01-22 09:53:00.049109096 +0000 UTC m=+0.027440180 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:53:00 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v623: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 43 op/s
Jan 22 04:53:00 np0005591760 nova_compute[248045]: 2026-01-22 09:53:00.285 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:00.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:00 np0005591760 friendly_jones[253637]: {
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:    "0": [
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:        {
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            "devices": [
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "/dev/loop3"
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            ],
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            "lv_name": "ceph_lv0",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            "lv_size": "21470642176",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            "name": "ceph_lv0",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            "tags": {
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.cluster_name": "ceph",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.crush_device_class": "",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.encrypted": "0",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.osd_id": "0",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.type": "block",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.vdo": "0",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:                "ceph.with_tpm": "0"
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            },
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            "type": "block",
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:            "vg_name": "ceph_vg0"
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:        }
Jan 22 04:53:00 np0005591760 friendly_jones[253637]:    ]
Jan 22 04:53:00 np0005591760 friendly_jones[253637]: }
Jan 22 04:53:00 np0005591760 systemd[1]: libpod-bfc94de4e5d11d1650c0d009a5144cbe58213726cdccb8464c7f4844b8394952.scope: Deactivated successfully.
Jan 22 04:53:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:00 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:00 np0005591760 podman[253646]: 2026-01-22 09:53:00.444555409 +0000 UTC m=+0.019126379 container died bfc94de4e5d11d1650c0d009a5144cbe58213726cdccb8464c7f4844b8394952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 04:53:00 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4de71c20a1ac25ed46f54ba13ec51ad5f9221e839a0c4c79c33e3dfa55bcb347-merged.mount: Deactivated successfully.
Jan 22 04:53:00 np0005591760 podman[253646]: 2026-01-22 09:53:00.469499845 +0000 UTC m=+0.044070804 container remove bfc94de4e5d11d1650c0d009a5144cbe58213726cdccb8464c7f4844b8394952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_jones, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:53:00 np0005591760 systemd[1]: libpod-conmon-bfc94de4e5d11d1650c0d009a5144cbe58213726cdccb8464c7f4844b8394952.scope: Deactivated successfully.
Jan 22 04:53:00 np0005591760 podman[253743]: 2026-01-22 09:53:00.924399703 +0000 UTC m=+0.030996802 container create 03e6f7ebda64fef2407535363ec511d997055c0378157c6886509de40a37b268 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 22 04:53:00 np0005591760 systemd[1]: Started libpod-conmon-03e6f7ebda64fef2407535363ec511d997055c0378157c6886509de40a37b268.scope.
Jan 22 04:53:00 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:53:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:00 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:00 np0005591760 podman[253743]: 2026-01-22 09:53:00.983209356 +0000 UTC m=+0.089806455 container init 03e6f7ebda64fef2407535363ec511d997055c0378157c6886509de40a37b268 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:53:00 np0005591760 podman[253743]: 2026-01-22 09:53:00.987840093 +0000 UTC m=+0.094437182 container start 03e6f7ebda64fef2407535363ec511d997055c0378157c6886509de40a37b268 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_proskuriakova, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 22 04:53:00 np0005591760 podman[253743]: 2026-01-22 09:53:00.989179989 +0000 UTC m=+0.095777078 container attach 03e6f7ebda64fef2407535363ec511d997055c0378157c6886509de40a37b268 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:53:00 np0005591760 trusting_proskuriakova[253757]: 167 167
Jan 22 04:53:00 np0005591760 systemd[1]: libpod-03e6f7ebda64fef2407535363ec511d997055c0378157c6886509de40a37b268.scope: Deactivated successfully.
Jan 22 04:53:00 np0005591760 podman[253743]: 2026-01-22 09:53:00.992013259 +0000 UTC m=+0.098610347 container died 03e6f7ebda64fef2407535363ec511d997055c0378157c6886509de40a37b268 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:53:01 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c3d8ec66263d7063835b9328e8c29634b14f468cfc7222d8f4287fda078ae152-merged.mount: Deactivated successfully.
Jan 22 04:53:01 np0005591760 podman[253743]: 2026-01-22 09:53:00.912409064 +0000 UTC m=+0.019006173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:53:01 np0005591760 podman[253743]: 2026-01-22 09:53:01.014731478 +0000 UTC m=+0.121328566 container remove 03e6f7ebda64fef2407535363ec511d997055c0378157c6886509de40a37b268 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_proskuriakova, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:53:01 np0005591760 systemd[1]: libpod-conmon-03e6f7ebda64fef2407535363ec511d997055c0378157c6886509de40a37b268.scope: Deactivated successfully.
Jan 22 04:53:01 np0005591760 podman[253779]: 2026-01-22 09:53:01.153888876 +0000 UTC m=+0.031693826 container create 766726036c8c7d4be0f12ef7179dd31e4860160abd43fbde944a526f9380eb94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haslett, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:53:01 np0005591760 systemd[1]: Started libpod-conmon-766726036c8c7d4be0f12ef7179dd31e4860160abd43fbde944a526f9380eb94.scope.
Jan 22 04:53:01 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:53:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1fc72fc2f8534c7490fb986ec3ec62938d5c59797c4c46c911a3f80f83ef0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1fc72fc2f8534c7490fb986ec3ec62938d5c59797c4c46c911a3f80f83ef0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1fc72fc2f8534c7490fb986ec3ec62938d5c59797c4c46c911a3f80f83ef0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1fc72fc2f8534c7490fb986ec3ec62938d5c59797c4c46c911a3f80f83ef0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:01 np0005591760 podman[253779]: 2026-01-22 09:53:01.209011018 +0000 UTC m=+0.086815969 container init 766726036c8c7d4be0f12ef7179dd31e4860160abd43fbde944a526f9380eb94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:53:01 np0005591760 podman[253779]: 2026-01-22 09:53:01.216678119 +0000 UTC m=+0.094483069 container start 766726036c8c7d4be0f12ef7179dd31e4860160abd43fbde944a526f9380eb94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haslett, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:53:01 np0005591760 podman[253779]: 2026-01-22 09:53:01.217997826 +0000 UTC m=+0.095802796 container attach 766726036c8c7d4be0f12ef7179dd31e4860160abd43fbde944a526f9380eb94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haslett, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:53:01 np0005591760 podman[253779]: 2026-01-22 09:53:01.142012422 +0000 UTC m=+0.019817392 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:53:01 np0005591760 podman[253789]: 2026-01-22 09:53:01.239351964 +0000 UTC m=+0.058494630 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 04:53:01 np0005591760 lvm[253888]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:53:01 np0005591760 vibrant_haslett[253793]: {}
Jan 22 04:53:01 np0005591760 lvm[253888]: VG ceph_vg0 finished
Jan 22 04:53:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:01 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a4003000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:01.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:01 np0005591760 systemd[1]: libpod-766726036c8c7d4be0f12ef7179dd31e4860160abd43fbde944a526f9380eb94.scope: Deactivated successfully.
Jan 22 04:53:01 np0005591760 podman[253779]: 2026-01-22 09:53:01.732036229 +0000 UTC m=+0.609841178 container died 766726036c8c7d4be0f12ef7179dd31e4860160abd43fbde944a526f9380eb94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Jan 22 04:53:01 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ef1fc72fc2f8534c7490fb986ec3ec62938d5c59797c4c46c911a3f80f83ef0a-merged.mount: Deactivated successfully.
Jan 22 04:53:01 np0005591760 podman[253779]: 2026-01-22 09:53:01.759551187 +0000 UTC m=+0.637356138 container remove 766726036c8c7d4be0f12ef7179dd31e4860160abd43fbde944a526f9380eb94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:53:01 np0005591760 systemd[1]: libpod-conmon-766726036c8c7d4be0f12ef7179dd31e4860160abd43fbde944a526f9380eb94.scope: Deactivated successfully.
Jan 22 04:53:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:53:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:53:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:53:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:53:02 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v624: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Jan 22 04:53:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:02.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095302 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:53:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:02 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:02 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:53:02 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:53:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:02 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:53:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:03 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:03.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095303 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:53:04 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v625: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Jan 22 04:53:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a4003000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:04 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:05 np0005591760 nova_compute[248045]: 2026-01-22 09:53:05.137 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:05 np0005591760 nova_compute[248045]: 2026-01-22 09:53:05.287 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:05 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:05.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:06 np0005591760 podman[253929]: 2026-01-22 09:53:06.119649646 +0000 UTC m=+0.103512238 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 04:53:06 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v626: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 18 KiB/s wr, 89 op/s
Jan 22 04:53:06 np0005591760 nova_compute[248045]: 2026-01-22 09:53:06.302 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:53:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:06.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:06 np0005591760 nova_compute[248045]: 2026-01-22 09:53:06.302 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 04:53:06 np0005591760 nova_compute[248045]: 2026-01-22 09:53:06.314 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 04:53:06 np0005591760 nova_compute[248045]: 2026-01-22 09:53:06.314 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:53:06 np0005591760 nova_compute[248045]: 2026-01-22 09:53:06.314 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 04:53:06 np0005591760 nova_compute[248045]: 2026-01-22 09:53:06.326 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:53:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:06 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:06 np0005591760 nova_compute[248045]: 2026-01-22 09:53:06.566 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:06 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:07.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:07.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:07.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:07.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:53:07] "GET /metrics HTTP/1.1" 200 48587 "" "Prometheus/2.51.0"
Jan 22 04:53:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:53:07] "GET /metrics HTTP/1.1" 200 48587 "" "Prometheus/2.51.0"
Jan 22 04:53:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:07 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:07.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:08 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v627: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 18 KiB/s wr, 89 op/s
Jan 22 04:53:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:53:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:08.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:53:08 np0005591760 nova_compute[248045]: 2026-01-22 09:53:08.333 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:53:08 np0005591760 nova_compute[248045]: 2026-01-22 09:53:08.334 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:53:08 np0005591760 nova_compute[248045]: 2026-01-22 09:53:08.334 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 04:53:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:53:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:08 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:08 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 22 04:53:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:08.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:08.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:08.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:08.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:08 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:09 np0005591760 nova_compute[248045]: 2026-01-22 09:53:09.296 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:53:09 np0005591760 nova_compute[248045]: 2026-01-22 09:53:09.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:53:09 np0005591760 nova_compute[248045]: 2026-01-22 09:53:09.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:53:09 np0005591760 nova_compute[248045]: 2026-01-22 09:53:09.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:53:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:09 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c0084a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:09.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:10 np0005591760 nova_compute[248045]: 2026-01-22 09:53:10.138 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:10 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v628: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Jan 22 04:53:10 np0005591760 nova_compute[248045]: 2026-01-22 09:53:10.288 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:10.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:10 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a4004100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.314 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.315 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.315 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.315 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.316 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:53:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:53:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/464563926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.655 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.706 248049 DEBUG nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.707 248049 DEBUG nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 04:53:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:11 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c0084c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:53:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:11.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.915 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.916 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4406MB free_disk=59.921791076660156GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": 
"0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.916 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.916 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.999 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Instance db3f9d63-cffc-4b71-b42f-7bd2d9e41955 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.999 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 04:53:11 np0005591760 nova_compute[248045]: 2026-01-22 09:53:11.999 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.011 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing inventories for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.050 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating ProviderTree inventory for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.050 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating inventory in ProviderTree for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.061 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing aggregate associations for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.078 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing trait associations for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f, traits: HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,HW_CPU_X86_AVX512VAES,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,HW_CPU_X86_SSE41,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.100 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:53:12 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v629: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 22 04:53:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:12.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:53:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131356922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.447 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:53:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.452 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating inventory in ProviderTree for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.483 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updated inventory for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.483 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.484 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating inventory in ProviderTree for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.497 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 04:53:12 np0005591760 nova_compute[248045]: 2026-01-22 09:53:12.497 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:53:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:12 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:53:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:13 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a4004a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:13.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:14 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v630: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 22 04:53:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:14.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c0084e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:14 np0005591760 nova_compute[248045]: 2026-01-22 09:53:14.496 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:53:14 np0005591760 nova_compute[248045]: 2026-01-22 09:53:14.497 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 04:53:14 np0005591760 nova_compute[248045]: 2026-01-22 09:53:14.497 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 04:53:14 np0005591760 nova_compute[248045]: 2026-01-22 09:53:14.762 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 04:53:14 np0005591760 nova_compute[248045]: 2026-01-22 09:53:14.762 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquired lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 04:53:14 np0005591760 nova_compute[248045]: 2026-01-22 09:53:14.762 248049 DEBUG nova.network.neutron [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 04:53:14 np0005591760 nova_compute[248045]: 2026-01-22 09:53:14.763 248049 DEBUG nova.objects.instance [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lazy-loading 'info_cache' on Instance uuid db3f9d63-cffc-4b71-b42f-7bd2d9e41955 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 04:53:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:14 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:15 np0005591760 nova_compute[248045]: 2026-01-22 09:53:15.139 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:53:15 np0005591760 nova_compute[248045]: 2026-01-22 09:53:15.289 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:53:15 np0005591760 nova_compute[248045]: 2026-01-22 09:53:15.513 248049 DEBUG nova.network.neutron [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Updating instance_info_cache with network_info: [{"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 04:53:15 np0005591760 nova_compute[248045]: 2026-01-22 09:53:15.526 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Releasing lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 04:53:15 np0005591760 nova_compute[248045]: 2026-01-22 09:53:15.526 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 22 04:53:15 np0005591760 nova_compute[248045]: 2026-01-22 09:53:15.526 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:53:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:15 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:15.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:16 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v631: 337 pgs: 337 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 22 04:53:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:16.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a4004a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:16 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:17.032Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:17.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:17.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:17.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:53:17] "GET /metrics HTTP/1.1" 200 48587 "" "Prometheus/2.51.0"
Jan 22 04:53:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:53:17] "GET /metrics HTTP/1.1" 200 48587 "" "Prometheus/2.51.0"
Jan 22 04:53:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:17 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:17.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:18 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v632: 337 pgs: 337 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 22 04:53:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:18.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:53:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:18.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:18.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:18.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:18.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:18 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a4004a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:53:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:53:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:53:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:53:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:53:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:53:19 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:19Z|00034|binding|INFO|Releasing lport 73a4c783-9ced-4308-bd61-5ef603c35996 from this chassis (sb_readonly=0)
Jan 22 04:53:19 np0005591760 nova_compute[248045]: 2026-01-22 09:53:19.595 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:53:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:19 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:53:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:19.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.140 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.211 248049 DEBUG oslo_concurrency.lockutils [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.211 248049 DEBUG oslo_concurrency.lockutils [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.211 248049 DEBUG oslo_concurrency.lockutils [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.211 248049 DEBUG oslo_concurrency.lockutils [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.212 248049 DEBUG oslo_concurrency.lockutils [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.212 248049 INFO nova.compute.manager [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Terminating instance
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.213 248049 DEBUG nova.compute.manager [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 22 04:53:20 np0005591760 kernel: tap3ae9c689-1b (unregistering): left promiscuous mode
Jan 22 04:53:20 np0005591760 NetworkManager[48920]: <info>  [1769075600.2418] device (tap3ae9c689-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 04:53:20 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:20Z|00035|binding|INFO|Releasing lport 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 from this chassis (sb_readonly=0)
Jan 22 04:53:20 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:20Z|00036|binding|INFO|Setting lport 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 down in Southbound
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.249 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:53:20 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:20Z|00037|binding|INFO|Removing iface tap3ae9c689-1b ovn-installed in OVS
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.251 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.254 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:7b:30 10.100.0.14'], port_security=['fa:16:3e:31:7b:30 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'db3f9d63-cffc-4b71-b42f-7bd2d9e41955', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6967a5cb-6cc9-4914-adb9-bd594f436add', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51ac6686-d036-4e59-bb31-55b907c04e7d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], logical_port=3ae9c689-1b36-4b9f-b18d-ee8cb51409c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.255 164103 INFO neutron.agent.ovn.metadata.agent [-] Port 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 in datapath b81a32f4-9ca3-4bd8-ba9d-4dddb997108a unbound from our chassis
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.256 164103 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b81a32f4-9ca3-4bd8-ba9d-4dddb997108a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.257 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[80ec3f54-d652-44dd-aac3-8b868b55cd21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.257 164103 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a namespace which is not needed anymore
Jan 22 04:53:20 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v633: 337 pgs: 337 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.275 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:53:20 np0005591760 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 22 04:53:20 np0005591760 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 12.532s CPU time.
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.290 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:53:20 np0005591760 systemd-machined[216371]: Machine qemu-1-instance-00000001 terminated.
Jan 22 04:53:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:20.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:20 np0005591760 neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a[253128]: [NOTICE]   (253132) : haproxy version is 2.8.14-c23fe91
Jan 22 04:53:20 np0005591760 neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a[253128]: [NOTICE]   (253132) : path to executable is /usr/sbin/haproxy
Jan 22 04:53:20 np0005591760 neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a[253128]: [WARNING]  (253132) : Exiting Master process...
Jan 22 04:53:20 np0005591760 neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a[253128]: [ALERT]    (253132) : Current worker (253134) exited with code 143 (Terminated)
Jan 22 04:53:20 np0005591760 neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a[253128]: [WARNING]  (253132) : All workers exited. Exiting... (0)
Jan 22 04:53:20 np0005591760 systemd[1]: libpod-3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b.scope: Deactivated successfully.
Jan 22 04:53:20 np0005591760 conmon[253128]: conmon 3f01dd4e507d28c05976 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b.scope/container/memory.events
Jan 22 04:53:20 np0005591760 podman[254058]: 2026-01-22 09:53:20.356365693 +0000 UTC m=+0.033552980 container died 3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 22 04:53:20 np0005591760 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b-userdata-shm.mount: Deactivated successfully.
Jan 22 04:53:20 np0005591760 systemd[1]: var-lib-containers-storage-overlay-7b93516223b5b4cefffc25768997c6b84c51f68b6c0185a6589bc09fb1575504-merged.mount: Deactivated successfully.
Jan 22 04:53:20 np0005591760 podman[254058]: 2026-01-22 09:53:20.38245692 +0000 UTC m=+0.059644206 container cleanup 3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 04:53:20 np0005591760 systemd[1]: libpod-conmon-3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b.scope: Deactivated successfully.
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.429 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:20 np0005591760 podman[254081]: 2026-01-22 09:53:20.431862287 +0000 UTC m=+0.030807195 container remove 3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.433 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.436 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[7108ba36-83a2-415f-a83c-ca3e1287ad9c]: (4, ('Thu Jan 22 09:53:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a (3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b)\n3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b\nThu Jan 22 09:53:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a (3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b)\n3f01dd4e507d28c0597648e5f9dd73d7ad3b18404605b3e6918c417f7fbd5b5b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.437 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[7e1d99ab-a4ec-46c8-9d3e-0ce724110ee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.438 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb81a32f4-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.439 248049 INFO nova.virt.libvirt.driver [-] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Instance destroyed successfully.#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.439 248049 DEBUG nova.objects.instance [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lazy-loading 'resources' on Instance uuid db3f9d63-cffc-4b71-b42f-7bd2d9e41955 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.441 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:20 np0005591760 kernel: tapb81a32f4-90: left promiscuous mode
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.449 248049 DEBUG nova.virt.libvirt.vif [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T09:52:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-751480636',display_name='tempest-TestNetworkBasicOps-server-751480636',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-751480636',id=1,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBILiBlL+TvH2gMCwjifyc/7Gm6jqtzdZz82DpKlqMzB6gNMcB2nyBl3WjXUnQtbtT+iCTUq5H4q1VSudwoC9f3p38sE8mJ1gg5Tmnybu9QHQy5PAl4rj4HJKymnAARdITw==',key_name='tempest-TestNetworkBasicOps-1904070563',keypairs=<?>,launch_index=0,launched_at=2026-01-22T09:52:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-7yqiqcub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T09:52:22Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=db3f9d63-cffc-4b71-b42f-7bd2d9e41955,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.449 248049 DEBUG nova.network.os_vif_util [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.450 248049 DEBUG nova.network.os_vif_util [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:31:7b:30,bridge_name='br-int',has_traffic_filtering=True,id=3ae9c689-1b36-4b9f-b18d-ee8cb51409c6,network=Network(b81a32f4-9ca3-4bd8-ba9d-4dddb997108a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae9c689-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.450 248049 DEBUG os_vif [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:31:7b:30,bridge_name='br-int',has_traffic_filtering=True,id=3ae9c689-1b36-4b9f-b18d-ee8cb51409c6,network=Network(b81a32f4-9ca3-4bd8-ba9d-4dddb997108a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae9c689-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.451 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.451 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ae9c689-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.453 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.455 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 04:53:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.464 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.465 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.465 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[2f97a659-0e66-431e-9c58-4619100f35c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.467 248049 INFO os_vif [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:31:7b:30,bridge_name='br-int',has_traffic_filtering=True,id=3ae9c689-1b36-4b9f-b18d-ee8cb51409c6,network=Network(b81a32f4-9ca3-4bd8-ba9d-4dddb997108a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae9c689-1b')#033[00m
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.473 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[71260e29-c6b7-4d3b-8725-9f50972a85d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.473 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[695498a9-8104-4940-8ae8-11be0c5c5898]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.485 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[993df17d-09fd-48aa-9c72-a1f5cd53d0c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 312420, 'reachable_time': 42540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254110, 'error': None, 'target': 'ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:20 np0005591760 systemd[1]: run-netns-ovnmeta\x2db81a32f4\x2d9ca3\x2d4bd8\x2dba9d\x2d4dddb997108a.mount: Deactivated successfully.
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.494 164492 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b81a32f4-9ca3-4bd8-ba9d-4dddb997108a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 22 04:53:20 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:20.495 164492 DEBUG oslo.privsep.daemon [-] privsep: reply[985a0d0c-6555-48ea-ad10-9f5a79da219e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.580 248049 DEBUG nova.compute.manager [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Received event network-changed-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.580 248049 DEBUG nova.compute.manager [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Refreshing instance network info cache due to event network-changed-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.580 248049 DEBUG oslo_concurrency.lockutils [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.581 248049 DEBUG oslo_concurrency.lockutils [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquired lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.581 248049 DEBUG nova.network.neutron [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Refreshing network info cache for port 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.630 248049 INFO nova.virt.libvirt.driver [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Deleting instance files /var/lib/nova/instances/db3f9d63-cffc-4b71-b42f-7bd2d9e41955_del#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.631 248049 INFO nova.virt.libvirt.driver [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Deletion of /var/lib/nova/instances/db3f9d63-cffc-4b71-b42f-7bd2d9e41955_del complete#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.662 248049 DEBUG nova.virt.libvirt.host [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.662 248049 INFO nova.virt.libvirt.host [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] UEFI support detected#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.663 248049 INFO nova.compute.manager [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Took 0.45 seconds to destroy the instance on the hypervisor.#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.664 248049 DEBUG oslo.service.loopingcall [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.664 248049 DEBUG nova.compute.manager [-] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 22 04:53:20 np0005591760 nova_compute[248045]: 2026-01-22 09:53:20.664 248049 DEBUG nova.network.neutron [-] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 22 04:53:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:20 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.056 248049 DEBUG nova.network.neutron [-] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.074 248049 INFO nova.compute.manager [-] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Took 0.41 seconds to deallocate network for instance.#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.109 248049 DEBUG oslo_concurrency.lockutils [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.110 248049 DEBUG oslo_concurrency.lockutils [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.131 248049 DEBUG nova.compute.manager [req-e1771b05-5c7a-4728-addf-a60b106e144f req-c2fc915d-20c1-4fdb-ab6b-6c123b0de338 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Received event network-vif-deleted-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.147 248049 DEBUG oslo_concurrency.processutils [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.412 248049 DEBUG nova.network.neutron [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Updated VIF entry in instance network info cache for port 3ae9c689-1b36-4b9f-b18d-ee8cb51409c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.412 248049 DEBUG nova.network.neutron [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Updating instance_info_cache with network_info: [{"id": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "address": "fa:16:3e:31:7b:30", "network": {"id": "b81a32f4-9ca3-4bd8-ba9d-4dddb997108a", "bridge": "br-int", "label": "tempest-network-smoke--194676211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae9c689-1b", "ovs_interfaceid": "3ae9c689-1b36-4b9f-b18d-ee8cb51409c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.432 248049 DEBUG oslo_concurrency.lockutils [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Releasing lock "refresh_cache-db3f9d63-cffc-4b71-b42f-7bd2d9e41955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.432 248049 DEBUG nova.compute.manager [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Received event network-vif-unplugged-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.432 248049 DEBUG oslo_concurrency.lockutils [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.432 248049 DEBUG oslo_concurrency.lockutils [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.432 248049 DEBUG oslo_concurrency.lockutils [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.433 248049 DEBUG nova.compute.manager [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] No waiting events found dispatching network-vif-unplugged-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.433 248049 DEBUG nova.compute.manager [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Received event network-vif-unplugged-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.433 248049 DEBUG nova.compute.manager [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Received event network-vif-plugged-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.433 248049 DEBUG oslo_concurrency.lockutils [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.433 248049 DEBUG oslo_concurrency.lockutils [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.433 248049 DEBUG oslo_concurrency.lockutils [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.433 248049 DEBUG nova.compute.manager [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] No waiting events found dispatching network-vif-plugged-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.434 248049 WARNING nova.compute.manager [req-8401c6db-3c95-44bb-9038-37ce2285e453 req-2bc7597c-9e20-407e-87fc-a7afcc21afdf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Received unexpected event network-vif-plugged-3ae9c689-1b36-4b9f-b18d-ee8cb51409c6 for instance with vm_state active and task_state deleting.#033[00m
Jan 22 04:53:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:53:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/51117405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.494 248049 DEBUG oslo_concurrency.processutils [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.498 248049 DEBUG nova.compute.provider_tree [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.507 248049 DEBUG nova.scheduler.client.report [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.519 248049 DEBUG oslo_concurrency.lockutils [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.537 248049 INFO nova.scheduler.client.report [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Deleted allocations for instance db3f9d63-cffc-4b71-b42f-7bd2d9e41955#033[00m
Jan 22 04:53:21 np0005591760 nova_compute[248045]: 2026-01-22 09:53:21.583 248049 DEBUG oslo_concurrency.lockutils [None req-0f6394ba-c763-4587-8b46-b2f8b6d97448 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "db3f9d63-cffc-4b71-b42f-7bd2d9e41955" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:53:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:21 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a4004a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:21.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:22 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v634: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 22 04:53:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:22.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:22 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f697c008540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:53:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:23 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f699c005720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:23.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:24 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v635: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Jan 22 04:53:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:24.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a4005b20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:24 np0005591760 kernel: ganesha.nfsd[253281]: segfault at 50 ip 00007f69c421f32e sp 00007f6948ff8210 error 4 in libntirpc.so.5.8[7f69c4204000+2c000] likely on CPU 0 (core 0, socket 0)
Jan 22 04:53:24 np0005591760 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 22 04:53:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[234383]: 22/01/2026 09:53:24 : epoch 6971f212 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69a4005b20 fd 48 proxy ignored for local
Jan 22 04:53:25 np0005591760 systemd[1]: Started Process Core Dump (PID 254152/UID 0).
Jan 22 04:53:25 np0005591760 nova_compute[248045]: 2026-01-22 09:53:25.291 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:25 np0005591760 nova_compute[248045]: 2026-01-22 09:53:25.453 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:25.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:26 np0005591760 systemd-coredump[254153]: Process 234387 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 81:#012#0  0x00007f69c421f32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Jan 22 04:53:26 np0005591760 systemd[1]: systemd-coredump@3-254152-0.service: Deactivated successfully.
Jan 22 04:53:26 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:53:26 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 04:53:26 np0005591760 podman[254160]: 2026-01-22 09:53:26.202183905 +0000 UTC m=+0.028203729 container died bc9745f833ab15a95783334c6971b2bac3c4abfd0e382b634fe7b91601565ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:53:26 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e998b9f6b9dcc2b8fa3ad99b12273bc5809f4260869226a24610ab0263771fc1-merged.mount: Deactivated successfully.
Jan 22 04:53:26 np0005591760 podman[254160]: 2026-01-22 09:53:26.219314772 +0000 UTC m=+0.045334576 container remove bc9745f833ab15a95783334c6971b2bac3c4abfd0e382b634fe7b91601565ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:53:26 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Main process exited, code=exited, status=139/n/a
Jan 22 04:53:26 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v636: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Jan 22 04:53:26 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Failed with result 'exit-code'.
Jan 22 04:53:26 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Consumed 1.324s CPU time.
Jan 22 04:53:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:26.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:27.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:27.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:27.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:27.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:53:27] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 04:53:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:53:27] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 04:53:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:27.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:27 np0005591760 nova_compute[248045]: 2026-01-22 09:53:27.959 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:28 np0005591760 nova_compute[248045]: 2026-01-22 09:53:28.066 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:28 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v637: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Jan 22 04:53:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:28.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:53:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:28.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:28.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:28.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:28.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:29.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 04:53:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 9768 writes, 35K keys, 9768 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 9768 writes, 2784 syncs, 3.51 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1269 writes, 2760 keys, 1269 commit groups, 1.0 writes per commit group, ingest: 2.21 MB, 0.00 MB/s#012Interval WAL: 1269 writes, 603 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5581da631350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5581da631350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memta
Jan 22 04:53:30 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v638: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Jan 22 04:53:30 np0005591760 nova_compute[248045]: 2026-01-22 09:53:30.293 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:30.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:30 np0005591760 nova_compute[248045]: 2026-01-22 09:53:30.455 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095330 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:53:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:31.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:32 np0005591760 podman[254200]: 2026-01-22 09:53:32.045434968 +0000 UTC m=+0.037415842 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 22 04:53:32 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v639: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Jan 22 04:53:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:53:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:32.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:53:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:53:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:33.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:34 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v640: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 04:53:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:34.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:35 np0005591760 nova_compute[248045]: 2026-01-22 09:53:35.294 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:35 np0005591760 nova_compute[248045]: 2026-01-22 09:53:35.438 248049 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769075600.4375186, db3f9d63-cffc-4b71-b42f-7bd2d9e41955 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 04:53:35 np0005591760 nova_compute[248045]: 2026-01-22 09:53:35.439 248049 INFO nova.compute.manager [-] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] VM Stopped (Lifecycle Event)#033[00m
Jan 22 04:53:35 np0005591760 nova_compute[248045]: 2026-01-22 09:53:35.452 248049 DEBUG nova.compute.manager [None req-edb388ce-ef00-49ec-aee7-9a641704e85d - - - - - -] [instance: db3f9d63-cffc-4b71-b42f-7bd2d9e41955] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 04:53:35 np0005591760 nova_compute[248045]: 2026-01-22 09:53:35.455 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:35.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:36 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v641: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 04:53:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:53:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:36.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:53:36 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Scheduled restart job, restart counter is at 4.
Jan 22 04:53:36 np0005591760 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:53:36 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Consumed 1.324s CPU time.
Jan 22 04:53:36 np0005591760 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:53:36 np0005591760 podman[254245]: 2026-01-22 09:53:36.569154375 +0000 UTC m=+0.060252022 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 04:53:36 np0005591760 podman[254308]: 2026-01-22 09:53:36.651461675 +0000 UTC m=+0.028392885 container create b976a0d59c790c2e84b0e0dc159b49f2973a1ca3667d6bf61038f99b8bf78f9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:53:36 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0026ed777c6bd7a86b617509426b6e5727c15925a14b4ba7e242bbe790b32c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:36 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0026ed777c6bd7a86b617509426b6e5727c15925a14b4ba7e242bbe790b32c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:36 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0026ed777c6bd7a86b617509426b6e5727c15925a14b4ba7e242bbe790b32c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:36 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0026ed777c6bd7a86b617509426b6e5727c15925a14b4ba7e242bbe790b32c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ylzmiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:36 np0005591760 podman[254308]: 2026-01-22 09:53:36.698742127 +0000 UTC m=+0.075673347 container init b976a0d59c790c2e84b0e0dc159b49f2973a1ca3667d6bf61038f99b8bf78f9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 22 04:53:36 np0005591760 podman[254308]: 2026-01-22 09:53:36.704599718 +0000 UTC m=+0.081530928 container start b976a0d59c790c2e84b0e0dc159b49f2973a1ca3667d6bf61038f99b8bf78f9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 22 04:53:36 np0005591760 bash[254308]: b976a0d59c790c2e84b0e0dc159b49f2973a1ca3667d6bf61038f99b8bf78f9d
Jan 22 04:53:36 np0005591760 podman[254308]: 2026-01-22 09:53:36.640878458 +0000 UTC m=+0.017809688 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:53:36 np0005591760 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:53:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:36 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 22 04:53:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:36 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 22 04:53:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:36 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 22 04:53:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:36 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 22 04:53:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:36 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 22 04:53:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:36 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 22 04:53:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:36 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 22 04:53:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:36 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:53:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:37.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:37.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:37.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:37.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:53:37] "GET /metrics HTTP/1.1" 200 48583 "" "Prometheus/2.51.0"
Jan 22 04:53:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:53:37] "GET /metrics HTTP/1.1" 200 48583 "" "Prometheus/2.51.0"
Jan 22 04:53:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:37.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:38 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v642: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:53:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:38.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:53:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:38.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:38.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:38.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:38.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:39.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.167 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.168 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.181 248049 DEBUG nova.compute.manager [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.228 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.229 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.233 248049 DEBUG nova.virt.hardware [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.233 248049 INFO nova.compute.claims [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 22 04:53:40 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v643: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.294 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.306 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:40.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.456 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:40 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:53:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268705928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.631 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.634 248049 DEBUG nova.compute.provider_tree [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.646 248049 DEBUG nova.scheduler.client.report [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.660 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.661 248049 DEBUG nova.compute.manager [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.695 248049 DEBUG nova.compute.manager [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.695 248049 DEBUG nova.network.neutron [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.710 248049 INFO nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.721 248049 DEBUG nova.compute.manager [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.798 248049 DEBUG nova.compute.manager [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.799 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.799 248049 INFO nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Creating image(s)#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.815 248049 DEBUG nova.storage.rbd_utils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.830 248049 DEBUG nova.storage.rbd_utils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.844 248049 DEBUG nova.storage.rbd_utils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.846 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.891 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.892 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "9db187949728ea707722fd244d769f131efa8688" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.892 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "9db187949728ea707722fd244d769f131efa8688" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.893 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "9db187949728ea707722fd244d769f131efa8688" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.907 248049 DEBUG nova.storage.rbd_utils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.909 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688 d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:53:40 np0005591760 nova_compute[248045]: 2026-01-22 09:53:40.920 248049 DEBUG nova.policy [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4428dd9b0fb64c25b8f33b0050d4ef6f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.031 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688 d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.067 248049 DEBUG nova.storage.rbd_utils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] resizing rbd image d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.113 248049 DEBUG nova.objects.instance [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lazy-loading 'migration_context' on Instance uuid d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.125 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.125 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Ensure instance console log exists: /var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.126 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.126 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.126 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.400 248049 DEBUG nova.network.neutron [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Successfully created port: abfe2221-d53c-4fee-9063-5d7c426351e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 22 04:53:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:41.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.947 248049 DEBUG nova.network.neutron [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Successfully updated port: abfe2221-d53c-4fee-9063-5d7c426351e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.958 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.958 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquired lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:53:41 np0005591760 nova_compute[248045]: 2026-01-22 09:53:41.958 248049 DEBUG nova.network.neutron [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.032 248049 DEBUG nova.compute.manager [req-c40315c7-bf56-465b-bd3b-6fa4093ce573 req-5b6102a5-ddd1-4a60-bfb3-4158aed6b92c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-changed-abfe2221-d53c-4fee-9063-5d7c426351e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.032 248049 DEBUG nova.compute.manager [req-c40315c7-bf56-465b-bd3b-6fa4093ce573 req-5b6102a5-ddd1-4a60-bfb3-4158aed6b92c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Refreshing instance network info cache due to event network-changed-abfe2221-d53c-4fee-9063-5d7c426351e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.032 248049 DEBUG oslo_concurrency.lockutils [req-c40315c7-bf56-465b-bd3b-6fa4093ce573 req-5b6102a5-ddd1-4a60-bfb3-4158aed6b92c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.072 248049 DEBUG nova.network.neutron [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 04:53:42 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v644: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:53:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:42.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.550 248049 DEBUG nova.network.neutron [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updating instance_info_cache with network_info: [{"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.561 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Releasing lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.562 248049 DEBUG nova.compute.manager [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Instance network_info: |[{"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.562 248049 DEBUG oslo_concurrency.lockutils [req-c40315c7-bf56-465b-bd3b-6fa4093ce573 req-5b6102a5-ddd1-4a60-bfb3-4158aed6b92c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquired lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.562 248049 DEBUG nova.network.neutron [req-c40315c7-bf56-465b-bd3b-6fa4093ce573 req-5b6102a5-ddd1-4a60-bfb3-4158aed6b92c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Refreshing network info cache for port abfe2221-d53c-4fee-9063-5d7c426351e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.564 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Start _get_guest_xml network_info=[{"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T09:51:33Z,direct_url=<?>,disk_format='qcow2',id=bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a894ac5b4f744f208fa506d5e8f67970',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T09:51:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'encryption_format': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'encryption_options': None, 'image_id': 'bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.567 248049 WARNING nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.573 248049 DEBUG nova.virt.libvirt.host [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.573 248049 DEBUG nova.virt.libvirt.host [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.575 248049 DEBUG nova.virt.libvirt.host [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.575 248049 DEBUG nova.virt.libvirt.host [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.576 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.576 248049 DEBUG nova.virt.hardware [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T09:51:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6eff66ba-fb3e-4ca7-b05b-920b01d9affd',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T09:51:33Z,direct_url=<?>,disk_format='qcow2',id=bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a894ac5b4f744f208fa506d5e8f67970',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T09:51:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.576 248049 DEBUG nova.virt.hardware [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.577 248049 DEBUG nova.virt.hardware [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.577 248049 DEBUG nova.virt.hardware [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.577 248049 DEBUG nova.virt.hardware [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.577 248049 DEBUG nova.virt.hardware [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.577 248049 DEBUG nova.virt.hardware [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.577 248049 DEBUG nova.virt.hardware [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.578 248049 DEBUG nova.virt.hardware [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.578 248049 DEBUG nova.virt.hardware [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.578 248049 DEBUG nova.virt.hardware [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.580 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:53:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:42 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:53:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:42 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:53:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 22 04:53:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4274147821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.921 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.939 248049 DEBUG nova.storage.rbd_utils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:53:42 np0005591760 nova_compute[248045]: 2026-01-22 09:53:42.941 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.240 248049 DEBUG nova.network.neutron [req-c40315c7-bf56-465b-bd3b-6fa4093ce573 req-5b6102a5-ddd1-4a60-bfb3-4158aed6b92c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updated VIF entry in instance network info cache for port abfe2221-d53c-4fee-9063-5d7c426351e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.240 248049 DEBUG nova.network.neutron [req-c40315c7-bf56-465b-bd3b-6fa4093ce573 req-5b6102a5-ddd1-4a60-bfb3-4158aed6b92c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updating instance_info_cache with network_info: [{"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.252 248049 DEBUG oslo_concurrency.lockutils [req-c40315c7-bf56-465b-bd3b-6fa4093ce573 req-5b6102a5-ddd1-4a60-bfb3-4158aed6b92c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Releasing lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:53:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 22 04:53:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2424740312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.289 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.289 248049 DEBUG nova.virt.libvirt.vif [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T09:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1749369208',display_name='tempest-TestNetworkBasicOps-server-1749369208',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1749369208',id=3,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxd8JIG0RPWf626mvreziWH2yiDFb0P2k8/NmwFLa/mbiOATVrdIbT3XTswnWNCIKRjsuwY+qEKA7uFd5Fwl2pMhqN8nk75u2lj0iA6TzMzRcmFVjay/UWg9wRlHSd1NQ==',key_name='tempest-TestNetworkBasicOps-1639780854',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-djmzrb11',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T09:53:40Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.290 248049 DEBUG nova.network.os_vif_util [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.290 248049 DEBUG nova.network.os_vif_util [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:cc:90,bridge_name='br-int',has_traffic_filtering=True,id=abfe2221-d53c-4fee-9063-5d7c426351e3,network=Network(a68d0b24-a42d-487a-87cc-f5ecce0ddcc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabfe2221-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.291 248049 DEBUG nova.objects.instance [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.299 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] End _get_guest_xml xml=<domain type="kvm">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  <uuid>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</uuid>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  <name>instance-00000003</name>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  <memory>131072</memory>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  <vcpu>1</vcpu>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  <metadata>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <nova:name>tempest-TestNetworkBasicOps-server-1749369208</nova:name>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <nova:creationTime>2026-01-22 09:53:42</nova:creationTime>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <nova:flavor name="m1.nano">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <nova:memory>128</nova:memory>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <nova:disk>1</nova:disk>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <nova:swap>0</nova:swap>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <nova:vcpus>1</nova:vcpus>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      </nova:flavor>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <nova:owner>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <nova:user uuid="4428dd9b0fb64c25b8f33b0050d4ef6f">tempest-TestNetworkBasicOps-349110285-project-member</nova:user>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <nova:project uuid="05af97dae0f4449ba7eb640bcd3f61e6">tempest-TestNetworkBasicOps-349110285</nova:project>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      </nova:owner>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <nova:root type="image" uuid="bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <nova:ports>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <nova:port uuid="abfe2221-d53c-4fee-9063-5d7c426351e3">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        </nova:port>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      </nova:ports>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    </nova:instance>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  </metadata>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  <sysinfo type="smbios">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <system>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <entry name="manufacturer">RDO</entry>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <entry name="product">OpenStack Compute</entry>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <entry name="serial">d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</entry>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <entry name="uuid">d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</entry>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <entry name="family">Virtual Machine</entry>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    </system>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  </sysinfo>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  <os>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <boot dev="hd"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <smbios mode="sysinfo"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  </os>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  <features>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <acpi/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <apic/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <vmcoreinfo/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  </features>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  <clock offset="utc">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <timer name="hpet" present="no"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  </clock>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  <cpu mode="host-model" match="exact">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  </cpu>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  <devices>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <disk type="network" device="disk">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <driver type="raw" cache="none"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <source protocol="rbd" name="vms/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <host name="192.168.122.100" port="6789"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <host name="192.168.122.102" port="6789"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <host name="192.168.122.101" port="6789"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <auth username="openstack">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <secret type="ceph" uuid="43df7a30-cf5f-5209-adfd-bf44298b19f2"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <target dev="vda" bus="virtio"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <disk type="network" device="cdrom">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <driver type="raw" cache="none"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <source protocol="rbd" name="vms/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk.config">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <host name="192.168.122.100" port="6789"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <host name="192.168.122.102" port="6789"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <host name="192.168.122.101" port="6789"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <auth username="openstack">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:        <secret type="ceph" uuid="43df7a30-cf5f-5209-adfd-bf44298b19f2"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <target dev="sda" bus="sata"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <interface type="ethernet">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <mac address="fa:16:3e:37:cc:90"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <model type="virtio"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <driver name="vhost" rx_queue_size="512"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <mtu size="1442"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <target dev="tapabfe2221-d5"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    </interface>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <serial type="pty">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <log file="/var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/console.log" append="off"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    </serial>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <video>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <model type="virtio"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    </video>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <input type="tablet" bus="usb"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <rng model="virtio">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <backend model="random">/dev/urandom</backend>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    </rng>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <controller type="usb" index="0"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    <memballoon model="virtio">
Jan 22 04:53:43 np0005591760 nova_compute[248045]:      <stats period="10"/>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:    </memballoon>
Jan 22 04:53:43 np0005591760 nova_compute[248045]:  </devices>
Jan 22 04:53:43 np0005591760 nova_compute[248045]: </domain>
Jan 22 04:53:43 np0005591760 nova_compute[248045]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.300 248049 DEBUG nova.compute.manager [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Preparing to wait for external event network-vif-plugged-abfe2221-d53c-4fee-9063-5d7c426351e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.300 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.300 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.300 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.301 248049 DEBUG nova.virt.libvirt.vif [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T09:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1749369208',display_name='tempest-TestNetworkBasicOps-server-1749369208',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1749369208',id=3,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxd8JIG0RPWf626mvreziWH2yiDFb0P2k8/NmwFLa/mbiOATVrdIbT3XTswnWNCIKRjsuwY+qEKA7uFd5Fwl2pMhqN8nk75u2lj0iA6TzMzRcmFVjay/UWg9wRlHSd1NQ==',key_name='tempest-TestNetworkBasicOps-1639780854',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-djmzrb11',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T09:53:40Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.301 248049 DEBUG nova.network.os_vif_util [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.302 248049 DEBUG nova.network.os_vif_util [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:cc:90,bridge_name='br-int',has_traffic_filtering=True,id=abfe2221-d53c-4fee-9063-5d7c426351e3,network=Network(a68d0b24-a42d-487a-87cc-f5ecce0ddcc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabfe2221-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.302 248049 DEBUG os_vif [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:cc:90,bridge_name='br-int',has_traffic_filtering=True,id=abfe2221-d53c-4fee-9063-5d7c426351e3,network=Network(a68d0b24-a42d-487a-87cc-f5ecce0ddcc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabfe2221-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.302 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.303 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.303 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.311 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.312 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabfe2221-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.314 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapabfe2221-d5, col_values=(('external_ids', {'iface-id': 'abfe2221-d53c-4fee-9063-5d7c426351e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:cc:90', 'vm-uuid': 'd967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:53:43 np0005591760 NetworkManager[48920]: <info>  [1769075623.3180] manager: (tapabfe2221-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.319 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.322 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.327 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.328 248049 INFO os_vif [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:cc:90,bridge_name='br-int',has_traffic_filtering=True,id=abfe2221-d53c-4fee-9063-5d7c426351e3,network=Network(a68d0b24-a42d-487a-87cc-f5ecce0ddcc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabfe2221-d5')#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.362 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.363 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.363 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No VIF found with MAC fa:16:3e:37:cc:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.363 248049 INFO nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Using config drive#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.379 248049 DEBUG nova.storage.rbd_utils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:53:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.631 248049 INFO nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Creating config drive at /var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/disk.config#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.635 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpno2cpb8p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.754 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpno2cpb8p" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:53:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:53:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:43.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.803 248049 DEBUG nova.storage.rbd_utils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.805 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/disk.config d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.884 248049 DEBUG oslo_concurrency.processutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/disk.config d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.885 248049 INFO nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Deleting local config drive /var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/disk.config because it was imported into RBD.#033[00m
Jan 22 04:53:43 np0005591760 kernel: tapabfe2221-d5: entered promiscuous mode
Jan 22 04:53:43 np0005591760 NetworkManager[48920]: <info>  [1769075623.9200] manager: (tapabfe2221-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 22 04:53:43 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:43Z|00038|binding|INFO|Claiming lport abfe2221-d53c-4fee-9063-5d7c426351e3 for this chassis.
Jan 22 04:53:43 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:43Z|00039|binding|INFO|abfe2221-d53c-4fee-9063-5d7c426351e3: Claiming fa:16:3e:37:cc:90 10.100.0.8
Jan 22 04:53:43 np0005591760 nova_compute[248045]: 2026-01-22 09:53:43.926 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:43 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:43.931 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:cc:90 10.100.0.8'], port_security=['fa:16:3e:37:cc:90 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '063acbef-2bd2-4b9b-b641-bc7c62945e62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be33538f-4886-46cf-b41d-06835b51122f, chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], logical_port=abfe2221-d53c-4fee-9063-5d7c426351e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:53:43 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:43.932 164103 INFO neutron.agent.ovn.metadata.agent [-] Port abfe2221-d53c-4fee-9063-5d7c426351e3 in datapath a68d0b24-a42d-487a-87cc-f5ecce0ddcc8 bound to our chassis#033[00m
Jan 22 04:53:43 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:43.933 164103 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a68d0b24-a42d-487a-87cc-f5ecce0ddcc8#033[00m
Jan 22 04:53:43 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:43.942 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[b50ec41c-6508-4824-8756-f8e9a989da7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:43 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:43.942 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa68d0b24-a1 in ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 22 04:53:43 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:43.944 253045 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa68d0b24-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 22 04:53:43 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:43.944 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[45d370d6-e330-45e6-8be2-423351369d5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:43 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:43.944 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[21a0cc94-4450-462b-ae69-4f26b1143d8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:43 np0005591760 systemd-machined[216371]: New machine qemu-2-instance-00000003.
Jan 22 04:53:43 np0005591760 systemd-udevd[254695]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:53:43 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:43.954 164492 DEBUG oslo.privsep.daemon [-] privsep: reply[bc53b4c2-d0ce-4e26-ae67-c0105b609ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:43 np0005591760 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Jan 22 04:53:43 np0005591760 NetworkManager[48920]: <info>  [1769075623.9700] device (tapabfe2221-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 04:53:43 np0005591760 NetworkManager[48920]: <info>  [1769075623.9708] device (tapabfe2221-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 04:53:43 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:43.976 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[28fdfeaf-dcf1-477d-8cfd-cfe38fc21afc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:43 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:43.996 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[703ecda3-2ea7-46cf-9921-b7b9ab37328b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:44 np0005591760 systemd-udevd[254698]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:53:44 np0005591760 NetworkManager[48920]: <info>  [1769075624.0022] manager: (tapa68d0b24-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.001 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[70a81f44-308e-4636-a302-0209c3b44069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.020 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:44 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:44Z|00040|binding|INFO|Setting lport abfe2221-d53c-4fee-9063-5d7c426351e3 ovn-installed in OVS
Jan 22 04:53:44 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:44Z|00041|binding|INFO|Setting lport abfe2221-d53c-4fee-9063-5d7c426351e3 up in Southbound
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.025 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.025 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a4f7ee-5722-47ca-a7ff-f5e2f95715f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.028 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[78001894-32d1-4bc1-957f-d83e51dde7df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:44 np0005591760 NetworkManager[48920]: <info>  [1769075624.0448] device (tapa68d0b24-a0): carrier: link connected
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.048 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[b948a682-17ee-455e-b347-d9ab0cd40527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.061 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[d0999201-01b8-442d-81d9-15231e107392]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa68d0b24-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:7b:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 320342, 'reachable_time': 44295, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254718, 'error': None, 'target': 'ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.073 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e150c1-cb14-495f-9316-b0a43cb6de78]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:7b9b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 320342, 'tstamp': 320342}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254719, 'error': None, 'target': 'ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.085 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[8d889401-7556-4ba2-b5c0-e9e1f6bd0747]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa68d0b24-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:7b:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 320342, 'reachable_time': 44295, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254720, 'error': None, 'target': 'ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.107 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc29043-8036-4cfa-80df-ed272439c015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.144 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[66836a98-9d0c-4039-a99d-c16a410c4469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.145 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa68d0b24-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.145 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.146 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa68d0b24-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:53:44 np0005591760 kernel: tapa68d0b24-a0: entered promiscuous mode
Jan 22 04:53:44 np0005591760 NetworkManager[48920]: <info>  [1769075624.1501] manager: (tapa68d0b24-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.150 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.152 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa68d0b24-a0, col_values=(('external_ids', {'iface-id': 'f3d58566-a72c-48ab-8f65-195b3d643ba5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:53:44 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:44Z|00042|binding|INFO|Releasing lport f3d58566-a72c-48ab-8f65-195b3d643ba5 from this chassis (sb_readonly=0)
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.153 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.170 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.170 164103 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a68d0b24-a42d-487a-87cc-f5ecce0ddcc8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a68d0b24-a42d-487a-87cc-f5ecce0ddcc8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.171 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[06594590-2f3b-44d2-99e2-b675ff50beb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.171 164103 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: global
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    log         /dev/log local0 debug
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    log-tag     haproxy-metadata-proxy-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    user        root
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    group       root
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    maxconn     1024
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    pidfile     /var/lib/neutron/external/pids/a68d0b24-a42d-487a-87cc-f5ecce0ddcc8.pid.haproxy
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    daemon
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: defaults
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    log global
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    mode http
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    option httplog
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    option dontlognull
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    option http-server-close
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    option forwardfor
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    retries                 3
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    timeout http-request    30s
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    timeout connect         30s
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    timeout client          32s
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    timeout server          32s
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    timeout http-keep-alive 30s
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: listen listener
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    bind 169.254.169.254:80
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    server metadata /var/lib/neutron/metadata_proxy
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]:    http-request add-header X-OVN-Network-ID a68d0b24-a42d-487a-87cc-f5ecce0ddcc8
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 22 04:53:44 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:44.172 164103 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8', 'env', 'PROCESS_TAG=haproxy-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a68d0b24-a42d-487a-87cc-f5ecce0ddcc8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 22 04:53:44 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v645: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 22 04:53:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:44.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:44 np0005591760 podman[254785]: 2026-01-22 09:53:44.462161546 +0000 UTC m=+0.031646858 container create 03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 04:53:44 np0005591760 systemd[1]: Started libpod-conmon-03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00.scope.
Jan 22 04:53:44 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.501 248049 DEBUG nova.virt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Emitting event <LifecycleEvent: 1769075624.5003195, d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.501 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] VM Started (Lifecycle Event)
Jan 22 04:53:44 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a3d55c022b22cda2078d5675258d435e2ce92ef8da17c45f5b343bff2145dd7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 04:53:44 np0005591760 podman[254785]: 2026-01-22 09:53:44.514184918 +0000 UTC m=+0.083670249 container init 03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.516 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.518 248049 DEBUG nova.virt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Emitting event <LifecycleEvent: 1769075624.5006294, d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 04:53:44 np0005591760 podman[254785]: 2026-01-22 09:53:44.518990345 +0000 UTC m=+0.088475657 container start 03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.518 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] VM Paused (Lifecycle Event)
Jan 22 04:53:44 np0005591760 podman[254785]: 2026-01-22 09:53:44.448272528 +0000 UTC m=+0.017757860 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.529 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.531 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 04:53:44 np0005591760 neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8[254802]: [NOTICE]   (254806) : New worker (254808) forked
Jan 22 04:53:44 np0005591760 neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8[254802]: [NOTICE]   (254806) : Loading success.
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.544 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.947 248049 DEBUG nova.compute.manager [req-0d22ecb0-30a5-45aa-b1f7-96376360aedb req-60e0b948-a719-440f-8487-d8f263022328 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-vif-plugged-abfe2221-d53c-4fee-9063-5d7c426351e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.947 248049 DEBUG oslo_concurrency.lockutils [req-0d22ecb0-30a5-45aa-b1f7-96376360aedb req-60e0b948-a719-440f-8487-d8f263022328 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.948 248049 DEBUG oslo_concurrency.lockutils [req-0d22ecb0-30a5-45aa-b1f7-96376360aedb req-60e0b948-a719-440f-8487-d8f263022328 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.948 248049 DEBUG oslo_concurrency.lockutils [req-0d22ecb0-30a5-45aa-b1f7-96376360aedb req-60e0b948-a719-440f-8487-d8f263022328 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.948 248049 DEBUG nova.compute.manager [req-0d22ecb0-30a5-45aa-b1f7-96376360aedb req-60e0b948-a719-440f-8487-d8f263022328 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Processing event network-vif-plugged-abfe2221-d53c-4fee-9063-5d7c426351e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.949 248049 DEBUG nova.compute.manager [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.951 248049 DEBUG nova.virt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Emitting event <LifecycleEvent: 1769075624.9510868, d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.951 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] VM Resumed (Lifecycle Event)
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.953 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.955 248049 INFO nova.virt.libvirt.driver [-] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Instance spawned successfully.
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.955 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.967 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.971 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.973 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.973 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.973 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.973 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.974 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.974 248049 DEBUG nova.virt.libvirt.driver [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 04:53:44 np0005591760 nova_compute[248045]: 2026-01-22 09:53:44.997 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 04:53:45 np0005591760 nova_compute[248045]: 2026-01-22 09:53:45.037 248049 INFO nova.compute.manager [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Took 4.24 seconds to spawn the instance on the hypervisor.
Jan 22 04:53:45 np0005591760 nova_compute[248045]: 2026-01-22 09:53:45.037 248049 DEBUG nova.compute.manager [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 04:53:45 np0005591760 nova_compute[248045]: 2026-01-22 09:53:45.083 248049 INFO nova.compute.manager [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Took 4.87 seconds to build instance.
Jan 22 04:53:45 np0005591760 nova_compute[248045]: 2026-01-22 09:53:45.093 248049 DEBUG oslo_concurrency.lockutils [None req-fcd580e2-8d84-48a8-b48a-edb1594dc9ac 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:53:45 np0005591760 nova_compute[248045]: 2026-01-22 09:53:45.296 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:53:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:45.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:46 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v646: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 22 04:53:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 04:53:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:46.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 04:53:47 np0005591760 nova_compute[248045]: 2026-01-22 09:53:47.006 248049 DEBUG nova.compute.manager [req-b8bd96bf-d23c-47be-8b13-cc0380d8f621 req-06a4c969-64e9-4ec9-8278-d2f0ef82c089 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-vif-plugged-abfe2221-d53c-4fee-9063-5d7c426351e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 04:53:47 np0005591760 nova_compute[248045]: 2026-01-22 09:53:47.006 248049 DEBUG oslo_concurrency.lockutils [req-b8bd96bf-d23c-47be-8b13-cc0380d8f621 req-06a4c969-64e9-4ec9-8278-d2f0ef82c089 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 04:53:47 np0005591760 nova_compute[248045]: 2026-01-22 09:53:47.006 248049 DEBUG oslo_concurrency.lockutils [req-b8bd96bf-d23c-47be-8b13-cc0380d8f621 req-06a4c969-64e9-4ec9-8278-d2f0ef82c089 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 04:53:47 np0005591760 nova_compute[248045]: 2026-01-22 09:53:47.006 248049 DEBUG oslo_concurrency.lockutils [req-b8bd96bf-d23c-47be-8b13-cc0380d8f621 req-06a4c969-64e9-4ec9-8278-d2f0ef82c089 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:53:47 np0005591760 nova_compute[248045]: 2026-01-22 09:53:47.006 248049 DEBUG nova.compute.manager [req-b8bd96bf-d23c-47be-8b13-cc0380d8f621 req-06a4c969-64e9-4ec9-8278-d2f0ef82c089 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] No waiting events found dispatching network-vif-plugged-abfe2221-d53c-4fee-9063-5d7c426351e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 04:53:47 np0005591760 nova_compute[248045]: 2026-01-22 09:53:47.007 248049 WARNING nova.compute.manager [req-b8bd96bf-d23c-47be-8b13-cc0380d8f621 req-06a4c969-64e9-4ec9-8278-d2f0ef82c089 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received unexpected event network-vif-plugged-abfe2221-d53c-4fee-9063-5d7c426351e3 for instance with vm_state active and task_state None.
Jan 22 04:53:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:47.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:47.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:47.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:47.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:47.311 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 04:53:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:47.311 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 04:53:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:53:47.312 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:53:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:53:47] "GET /metrics HTTP/1.1" 200 48583 "" "Prometheus/2.51.0"
Jan 22 04:53:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:53:47] "GET /metrics HTTP/1.1" 200 48583 "" "Prometheus/2.51.0"
Jan 22 04:53:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:47.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:48 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v647: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 22 04:53:48 np0005591760 nova_compute[248045]: 2026-01-22 09:53:48.319 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:53:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:48.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:48.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:48.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:48.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:48.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:49 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:53:49
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'volumes']
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:53:49 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:49Z|00043|binding|INFO|Releasing lport f3d58566-a72c-48ab-8f65-195b3d643ba5 from this chassis (sb_readonly=0)
Jan 22 04:53:49 np0005591760 NetworkManager[48920]: <info>  [1769075629.2683] manager: (patch-br-int-to-provnet-397c94eb-88af-4737-bae3-7adb982d097b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Jan 22 04:53:49 np0005591760 NetworkManager[48920]: <info>  [1769075629.2689] manager: (patch-provnet-397c94eb-88af-4737-bae3-7adb982d097b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 22 04:53:49 np0005591760 nova_compute[248045]: 2026-01-22 09:53:49.275 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:49 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:49Z|00044|binding|INFO|Releasing lport f3d58566-a72c-48ab-8f65-195b3d643ba5 from this chassis (sb_readonly=0)
Jan 22 04:53:49 np0005591760 nova_compute[248045]: 2026-01-22 09:53:49.316 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:53:49 np0005591760 nova_compute[248045]: 2026-01-22 09:53:49.319 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:53:49 np0005591760 nova_compute[248045]: 2026-01-22 09:53:49.493 248049 DEBUG nova.compute.manager [req-dbf6521b-11ef-48b4-a912-e542ff387390 req-5681a9a5-2ff3-4e24-81de-ff66cd76a46d e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-changed-abfe2221-d53c-4fee-9063-5d7c426351e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:53:49 np0005591760 nova_compute[248045]: 2026-01-22 09:53:49.493 248049 DEBUG nova.compute.manager [req-dbf6521b-11ef-48b4-a912-e542ff387390 req-5681a9a5-2ff3-4e24-81de-ff66cd76a46d e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Refreshing instance network info cache due to event network-changed-abfe2221-d53c-4fee-9063-5d7c426351e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 04:53:49 np0005591760 nova_compute[248045]: 2026-01-22 09:53:49.493 248049 DEBUG oslo_concurrency.lockutils [req-dbf6521b-11ef-48b4-a912-e542ff387390 req-5681a9a5-2ff3-4e24-81de-ff66cd76a46d e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:53:49 np0005591760 nova_compute[248045]: 2026-01-22 09:53:49.493 248049 DEBUG oslo_concurrency.lockutils [req-dbf6521b-11ef-48b4-a912-e542ff387390 req-5681a9a5-2ff3-4e24-81de-ff66cd76a46d e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquired lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:53:49 np0005591760 nova_compute[248045]: 2026-01-22 09:53:49.494 248049 DEBUG nova.network.neutron [req-dbf6521b-11ef-48b4-a912-e542ff387390 req-5681a9a5-2ff3-4e24-81de-ff66cd76a46d e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Refreshing network info cache for port abfe2221-d53c-4fee-9063-5d7c426351e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:53:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:53:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:49 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:49.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:50 np0005591760 nova_compute[248045]: 2026-01-22 09:53:50.254 248049 DEBUG nova.network.neutron [req-dbf6521b-11ef-48b4-a912-e542ff387390 req-5681a9a5-2ff3-4e24-81de-ff66cd76a46d e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updated VIF entry in instance network info cache for port abfe2221-d53c-4fee-9063-5d7c426351e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 04:53:50 np0005591760 nova_compute[248045]: 2026-01-22 09:53:50.255 248049 DEBUG nova.network.neutron [req-dbf6521b-11ef-48b4-a912-e542ff387390 req-5681a9a5-2ff3-4e24-81de-ff66cd76a46d e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updating instance_info_cache with network_info: [{"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:53:50 np0005591760 nova_compute[248045]: 2026-01-22 09:53:50.276 248049 DEBUG oslo_concurrency.lockutils [req-dbf6521b-11ef-48b4-a912-e542ff387390 req-5681a9a5-2ff3-4e24-81de-ff66cd76a46d e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Releasing lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:53:50 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v648: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 22 04:53:50 np0005591760 nova_compute[248045]: 2026-01-22 09:53:50.297 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:50.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:50 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095351 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:53:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:51 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:51 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:51.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:52 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v649: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 22 04:53:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:52.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:52 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078003d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:53 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078003d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:53 np0005591760 nova_compute[248045]: 2026-01-22 09:53:53.321 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:53:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:53 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d23c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:53.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:54 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v650: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 22 04:53:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:54.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:54 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088002250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:55 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c002530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:55 np0005591760 nova_compute[248045]: 2026-01-22 09:53:55.299 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:55 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078003d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:55.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:56 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v651: 337 pgs: 337 active+clean; 109 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 143 op/s
Jan 22 04:53:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:56.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:56 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:56Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:cc:90 10.100.0.8
Jan 22 04:53:56 np0005591760 ovn_controller[154073]: 2026-01-22T09:53:56Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:cc:90 10.100.0.8
Jan 22 04:53:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:56 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088002250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:57 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d2ce0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:57.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:57.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:57.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:57.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:53:57] "GET /metrics HTTP/1.1" 200 48597 "" "Prometheus/2.51.0"
Jan 22 04:53:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:53:57] "GET /metrics HTTP/1.1" 200 48597 "" "Prometheus/2.51.0"
Jan 22 04:53:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:57 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d2ce0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:53:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:53:58 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v652: 337 pgs: 337 active+clean; 109 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Jan 22 04:53:58 np0005591760 nova_compute[248045]: 2026-01-22 09:53:58.324 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:53:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:53:58.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:53:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:53:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:58 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078005070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:58.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:58.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:58.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:53:58.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:53:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:59 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078005990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007439476907924725 of space, bias 1.0, pg target 0.22318430723774174 quantized to 32 (current 32)
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:53:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:53:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:53:59 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:53:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:53:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:53:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:53:59.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:00 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v653: 337 pgs: 337 active+clean; 109 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Jan 22 04:54:00 np0005591760 nova_compute[248045]: 2026-01-22 09:54:00.300 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:00.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:00 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d2ce0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:01 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d2ce0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:01 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d2ce0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:01.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:01 np0005591760 nova_compute[248045]: 2026-01-22 09:54:01.821 248049 INFO nova.compute.manager [None req-38fe0b88-694c-4a02-8f4a-0c584da6c943 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Get console output#033[00m
Jan 22 04:54:01 np0005591760 nova_compute[248045]: 2026-01-22 09:54:01.824 253225 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 22 04:54:02 np0005591760 podman[254921]: 2026-01-22 09:54:02.169435264 +0000 UTC m=+0.063188919 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 04:54:02 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v654: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Jan 22 04:54:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:54:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:54:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:54:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:54:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:02.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:02 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088002250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:02 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v655: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 2.4 MiB/s wr, 70 op/s
Jan 22 04:54:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:54:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:54:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:03 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078005990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:03 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:03.079 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:52:1d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:ec:a7:e9:bb:bd'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:54:03 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:03.080 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 04:54:03 np0005591760 nova_compute[248045]: 2026-01-22 09:54:03.081 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:03 np0005591760 podman[255120]: 2026-01-22 09:54:03.276343757 +0000 UTC m=+0.029247779 container create 2a52de94c46b8f52c8227322728fbb5d9c6ae9b9a1e285862cd8ba208a90dfd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:54:03 np0005591760 systemd[1]: Started libpod-conmon-2a52de94c46b8f52c8227322728fbb5d9c6ae9b9a1e285862cd8ba208a90dfd0.scope.
Jan 22 04:54:03 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:54:03 np0005591760 nova_compute[248045]: 2026-01-22 09:54:03.325 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:03 np0005591760 podman[255120]: 2026-01-22 09:54:03.330698673 +0000 UTC m=+0.083602697 container init 2a52de94c46b8f52c8227322728fbb5d9c6ae9b9a1e285862cd8ba208a90dfd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_knuth, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:54:03 np0005591760 podman[255120]: 2026-01-22 09:54:03.335867972 +0000 UTC m=+0.088771994 container start 2a52de94c46b8f52c8227322728fbb5d9c6ae9b9a1e285862cd8ba208a90dfd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_knuth, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 04:54:03 np0005591760 podman[255120]: 2026-01-22 09:54:03.337106928 +0000 UTC m=+0.090010951 container attach 2a52de94c46b8f52c8227322728fbb5d9c6ae9b9a1e285862cd8ba208a90dfd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_knuth, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:54:03 np0005591760 elastic_knuth[255133]: 167 167
Jan 22 04:54:03 np0005591760 systemd[1]: libpod-2a52de94c46b8f52c8227322728fbb5d9c6ae9b9a1e285862cd8ba208a90dfd0.scope: Deactivated successfully.
Jan 22 04:54:03 np0005591760 podman[255120]: 2026-01-22 09:54:03.340081999 +0000 UTC m=+0.092986032 container died 2a52de94c46b8f52c8227322728fbb5d9c6ae9b9a1e285862cd8ba208a90dfd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_knuth, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:54:03 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:03 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:03 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:03 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:03 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:54:03 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:03 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:03 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:54:03 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6c3cdd4cef54d5c901fda1627208e3c09111cce4d63debfa3e5cfcf83ef7c6f1-merged.mount: Deactivated successfully.
Jan 22 04:54:03 np0005591760 podman[255120]: 2026-01-22 09:54:03.263943681 +0000 UTC m=+0.016847724 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:54:03 np0005591760 podman[255120]: 2026-01-22 09:54:03.36336636 +0000 UTC m=+0.116270384 container remove 2a52de94c46b8f52c8227322728fbb5d9c6ae9b9a1e285862cd8ba208a90dfd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 04:54:03 np0005591760 systemd[1]: libpod-conmon-2a52de94c46b8f52c8227322728fbb5d9c6ae9b9a1e285862cd8ba208a90dfd0.scope: Deactivated successfully.
Jan 22 04:54:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:54:03 np0005591760 podman[255156]: 2026-01-22 09:54:03.494237748 +0000 UTC m=+0.030080448 container create a0827cf8fca7f2d250cfbb44ade8dbf00876fa8849cd54257ca14536a99a4657 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_burnell, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:54:03 np0005591760 systemd[1]: Started libpod-conmon-a0827cf8fca7f2d250cfbb44ade8dbf00876fa8849cd54257ca14536a99a4657.scope.
Jan 22 04:54:03 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:54:03 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2dc416565f29a0199702171d949131043b4ecc4d972b41b1d843be163891b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:03 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2dc416565f29a0199702171d949131043b4ecc4d972b41b1d843be163891b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:03 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2dc416565f29a0199702171d949131043b4ecc4d972b41b1d843be163891b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:03 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2dc416565f29a0199702171d949131043b4ecc4d972b41b1d843be163891b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:03 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2dc416565f29a0199702171d949131043b4ecc4d972b41b1d843be163891b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:03 np0005591760 podman[255156]: 2026-01-22 09:54:03.553849409 +0000 UTC m=+0.089692119 container init a0827cf8fca7f2d250cfbb44ade8dbf00876fa8849cd54257ca14536a99a4657 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:54:03 np0005591760 podman[255156]: 2026-01-22 09:54:03.560285928 +0000 UTC m=+0.096128628 container start a0827cf8fca7f2d250cfbb44ade8dbf00876fa8849cd54257ca14536a99a4657 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_burnell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:54:03 np0005591760 podman[255156]: 2026-01-22 09:54:03.561268331 +0000 UTC m=+0.097111051 container attach a0827cf8fca7f2d250cfbb44ade8dbf00876fa8849cd54257ca14536a99a4657 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_burnell, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:54:03 np0005591760 podman[255156]: 2026-01-22 09:54:03.482145164 +0000 UTC m=+0.017987884 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:54:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:03 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078005990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:03 np0005591760 inspiring_burnell[255169]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:54:03 np0005591760 inspiring_burnell[255169]: --> All data devices are unavailable
Jan 22 04:54:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:03.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:03 np0005591760 systemd[1]: libpod-a0827cf8fca7f2d250cfbb44ade8dbf00876fa8849cd54257ca14536a99a4657.scope: Deactivated successfully.
Jan 22 04:54:03 np0005591760 nova_compute[248045]: 2026-01-22 09:54:03.841 248049 DEBUG oslo_concurrency.lockutils [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "interface-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:03 np0005591760 nova_compute[248045]: 2026-01-22 09:54:03.841 248049 DEBUG oslo_concurrency.lockutils [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "interface-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:03 np0005591760 nova_compute[248045]: 2026-01-22 09:54:03.841 248049 DEBUG nova.objects.instance [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lazy-loading 'flavor' on Instance uuid d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:54:03 np0005591760 podman[255184]: 2026-01-22 09:54:03.856182455 +0000 UTC m=+0.018932255 container died a0827cf8fca7f2d250cfbb44ade8dbf00876fa8849cd54257ca14536a99a4657 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 22 04:54:03 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3a2dc416565f29a0199702171d949131043b4ecc4d972b41b1d843be163891b0-merged.mount: Deactivated successfully.
Jan 22 04:54:03 np0005591760 podman[255184]: 2026-01-22 09:54:03.879646254 +0000 UTC m=+0.042396045 container remove a0827cf8fca7f2d250cfbb44ade8dbf00876fa8849cd54257ca14536a99a4657 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_burnell, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 22 04:54:03 np0005591760 systemd[1]: libpod-conmon-a0827cf8fca7f2d250cfbb44ade8dbf00876fa8849cd54257ca14536a99a4657.scope: Deactivated successfully.
Jan 22 04:54:04 np0005591760 podman[255280]: 2026-01-22 09:54:04.301067758 +0000 UTC m=+0.030361941 container create 263089d15d45be6ce4f3ffe9f26fe9085f365d82966461ba510a376471304f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:54:04 np0005591760 systemd[1]: Started libpod-conmon-263089d15d45be6ce4f3ffe9f26fe9085f365d82966461ba510a376471304f2e.scope.
Jan 22 04:54:04 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:54:04 np0005591760 podman[255280]: 2026-01-22 09:54:04.352471178 +0000 UTC m=+0.081765381 container init 263089d15d45be6ce4f3ffe9f26fe9085f365d82966461ba510a376471304f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 04:54:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:04.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:04 np0005591760 podman[255280]: 2026-01-22 09:54:04.357800458 +0000 UTC m=+0.087094652 container start 263089d15d45be6ce4f3ffe9f26fe9085f365d82966461ba510a376471304f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_poitras, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 04:54:04 np0005591760 podman[255280]: 2026-01-22 09:54:04.358934487 +0000 UTC m=+0.088228680 container attach 263089d15d45be6ce4f3ffe9f26fe9085f365d82966461ba510a376471304f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_poitras, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:54:04 np0005591760 festive_poitras[255294]: 167 167
Jan 22 04:54:04 np0005591760 systemd[1]: libpod-263089d15d45be6ce4f3ffe9f26fe9085f365d82966461ba510a376471304f2e.scope: Deactivated successfully.
Jan 22 04:54:04 np0005591760 podman[255280]: 2026-01-22 09:54:04.361727334 +0000 UTC m=+0.091021517 container died 263089d15d45be6ce4f3ffe9f26fe9085f365d82966461ba510a376471304f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_poitras, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 04:54:04 np0005591760 systemd[1]: var-lib-containers-storage-overlay-d437bb9b64b9d468c4e3a5282d4d89f8986dd401ce35273d372ceb697ffcede8-merged.mount: Deactivated successfully.
Jan 22 04:54:04 np0005591760 podman[255280]: 2026-01-22 09:54:04.379711941 +0000 UTC m=+0.109006124 container remove 263089d15d45be6ce4f3ffe9f26fe9085f365d82966461ba510a376471304f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:54:04 np0005591760 podman[255280]: 2026-01-22 09:54:04.288426628 +0000 UTC m=+0.017720830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:54:04 np0005591760 systemd[1]: libpod-conmon-263089d15d45be6ce4f3ffe9f26fe9085f365d82966461ba510a376471304f2e.scope: Deactivated successfully.
Jan 22 04:54:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:04 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078005990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:04 np0005591760 podman[255315]: 2026-01-22 09:54:04.51396126 +0000 UTC m=+0.029662381 container create b1a4fe4c3457d1fd7ed0aa7ba02f5ebc4789cc699d1f4f4f65398c8ff9b6c904 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_pare, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 04:54:04 np0005591760 systemd[1]: Started libpod-conmon-b1a4fe4c3457d1fd7ed0aa7ba02f5ebc4789cc699d1f4f4f65398c8ff9b6c904.scope.
Jan 22 04:54:04 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:54:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14e430b8420520730da84bde6d47a11e7acb1730b5e31d6bb9e99402b62e68b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14e430b8420520730da84bde6d47a11e7acb1730b5e31d6bb9e99402b62e68b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14e430b8420520730da84bde6d47a11e7acb1730b5e31d6bb9e99402b62e68b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14e430b8420520730da84bde6d47a11e7acb1730b5e31d6bb9e99402b62e68b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:04 np0005591760 podman[255315]: 2026-01-22 09:54:04.567970244 +0000 UTC m=+0.083671375 container init b1a4fe4c3457d1fd7ed0aa7ba02f5ebc4789cc699d1f4f4f65398c8ff9b6c904 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 04:54:04 np0005591760 podman[255315]: 2026-01-22 09:54:04.573491597 +0000 UTC m=+0.089192718 container start b1a4fe4c3457d1fd7ed0aa7ba02f5ebc4789cc699d1f4f4f65398c8ff9b6c904 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_pare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Jan 22 04:54:04 np0005591760 podman[255315]: 2026-01-22 09:54:04.574751784 +0000 UTC m=+0.090452915 container attach b1a4fe4c3457d1fd7ed0aa7ba02f5ebc4789cc699d1f4f4f65398c8ff9b6c904 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 22 04:54:04 np0005591760 podman[255315]: 2026-01-22 09:54:04.502045879 +0000 UTC m=+0.017747000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:54:04 np0005591760 kind_pare[255328]: {
Jan 22 04:54:04 np0005591760 kind_pare[255328]:    "0": [
Jan 22 04:54:04 np0005591760 kind_pare[255328]:        {
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            "devices": [
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "/dev/loop3"
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            ],
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            "lv_name": "ceph_lv0",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            "lv_size": "21470642176",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            "name": "ceph_lv0",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            "tags": {
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.cluster_name": "ceph",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.crush_device_class": "",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.encrypted": "0",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.osd_id": "0",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.type": "block",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.vdo": "0",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:                "ceph.with_tpm": "0"
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            },
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            "type": "block",
Jan 22 04:54:04 np0005591760 kind_pare[255328]:            "vg_name": "ceph_vg0"
Jan 22 04:54:04 np0005591760 kind_pare[255328]:        }
Jan 22 04:54:04 np0005591760 kind_pare[255328]:    ]
Jan 22 04:54:04 np0005591760 kind_pare[255328]: }
Jan 22 04:54:04 np0005591760 systemd[1]: libpod-b1a4fe4c3457d1fd7ed0aa7ba02f5ebc4789cc699d1f4f4f65398c8ff9b6c904.scope: Deactivated successfully.
Jan 22 04:54:04 np0005591760 podman[255315]: 2026-01-22 09:54:04.812154427 +0000 UTC m=+0.327855549 container died b1a4fe4c3457d1fd7ed0aa7ba02f5ebc4789cc699d1f4f4f65398c8ff9b6c904 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_pare, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:54:04 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c14e430b8420520730da84bde6d47a11e7acb1730b5e31d6bb9e99402b62e68b-merged.mount: Deactivated successfully.
Jan 22 04:54:04 np0005591760 podman[255315]: 2026-01-22 09:54:04.833767257 +0000 UTC m=+0.349468379 container remove b1a4fe4c3457d1fd7ed0aa7ba02f5ebc4789cc699d1f4f4f65398c8ff9b6c904 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_pare, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:54:04 np0005591760 systemd[1]: libpod-conmon-b1a4fe4c3457d1fd7ed0aa7ba02f5ebc4789cc699d1f4f4f65398c8ff9b6c904.scope: Deactivated successfully.
Jan 22 04:54:04 np0005591760 nova_compute[248045]: 2026-01-22 09:54:04.855 248049 DEBUG nova.objects.instance [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lazy-loading 'pci_requests' on Instance uuid d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:54:04 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v656: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 2.4 MiB/s wr, 70 op/s
Jan 22 04:54:04 np0005591760 nova_compute[248045]: 2026-01-22 09:54:04.879 248049 DEBUG nova.network.neutron [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 04:54:04 np0005591760 nova_compute[248045]: 2026-01-22 09:54:04.996 248049 DEBUG nova.policy [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4428dd9b0fb64c25b8f33b0050d4ef6f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 04:54:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:05 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003a50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:05 np0005591760 podman[255427]: 2026-01-22 09:54:05.250728699 +0000 UTC m=+0.027248368 container create bcb4a2e0ba69a792ad94374bb337cfeecf25a16a7ea3da7cd898f0fcd306a26c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:54:05 np0005591760 systemd[1]: Started libpod-conmon-bcb4a2e0ba69a792ad94374bb337cfeecf25a16a7ea3da7cd898f0fcd306a26c.scope.
Jan 22 04:54:05 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:54:05 np0005591760 nova_compute[248045]: 2026-01-22 09:54:05.303 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:05 np0005591760 podman[255427]: 2026-01-22 09:54:05.30799736 +0000 UTC m=+0.084517031 container init bcb4a2e0ba69a792ad94374bb337cfeecf25a16a7ea3da7cd898f0fcd306a26c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 04:54:05 np0005591760 podman[255427]: 2026-01-22 09:54:05.312243759 +0000 UTC m=+0.088763429 container start bcb4a2e0ba69a792ad94374bb337cfeecf25a16a7ea3da7cd898f0fcd306a26c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_darwin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 04:54:05 np0005591760 podman[255427]: 2026-01-22 09:54:05.313280434 +0000 UTC m=+0.089800104 container attach bcb4a2e0ba69a792ad94374bb337cfeecf25a16a7ea3da7cd898f0fcd306a26c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 22 04:54:05 np0005591760 nervous_darwin[255440]: 167 167
Jan 22 04:54:05 np0005591760 systemd[1]: libpod-bcb4a2e0ba69a792ad94374bb337cfeecf25a16a7ea3da7cd898f0fcd306a26c.scope: Deactivated successfully.
Jan 22 04:54:05 np0005591760 conmon[255440]: conmon bcb4a2e0ba69a792ad94 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcb4a2e0ba69a792ad94374bb337cfeecf25a16a7ea3da7cd898f0fcd306a26c.scope/container/memory.events
Jan 22 04:54:05 np0005591760 podman[255427]: 2026-01-22 09:54:05.31663972 +0000 UTC m=+0.093159389 container died bcb4a2e0ba69a792ad94374bb337cfeecf25a16a7ea3da7cd898f0fcd306a26c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:54:05 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0e7b9f52e73b08143671c3efdf3c61c342be4c3fe20381f0e6bdf1a4df25d490-merged.mount: Deactivated successfully.
Jan 22 04:54:05 np0005591760 podman[255427]: 2026-01-22 09:54:05.336631922 +0000 UTC m=+0.113151593 container remove bcb4a2e0ba69a792ad94374bb337cfeecf25a16a7ea3da7cd898f0fcd306a26c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:54:05 np0005591760 podman[255427]: 2026-01-22 09:54:05.239543223 +0000 UTC m=+0.016062913 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:54:05 np0005591760 systemd[1]: libpod-conmon-bcb4a2e0ba69a792ad94374bb337cfeecf25a16a7ea3da7cd898f0fcd306a26c.scope: Deactivated successfully.
Jan 22 04:54:05 np0005591760 podman[255462]: 2026-01-22 09:54:05.468839941 +0000 UTC m=+0.030241894 container create b4219d3adf046de19d180082ff72da44444d04fed799fb0eefa04079296b28ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_matsumoto, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:54:05 np0005591760 systemd[1]: Started libpod-conmon-b4219d3adf046de19d180082ff72da44444d04fed799fb0eefa04079296b28ff.scope.
Jan 22 04:54:05 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:54:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4495c8a8206eb45f12f9832928fcceebb38011860a973330ac514bddf54c32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4495c8a8206eb45f12f9832928fcceebb38011860a973330ac514bddf54c32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4495c8a8206eb45f12f9832928fcceebb38011860a973330ac514bddf54c32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4495c8a8206eb45f12f9832928fcceebb38011860a973330ac514bddf54c32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:05 np0005591760 podman[255462]: 2026-01-22 09:54:05.524803552 +0000 UTC m=+0.086205515 container init b4219d3adf046de19d180082ff72da44444d04fed799fb0eefa04079296b28ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_matsumoto, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 04:54:05 np0005591760 podman[255462]: 2026-01-22 09:54:05.531235161 +0000 UTC m=+0.092637104 container start b4219d3adf046de19d180082ff72da44444d04fed799fb0eefa04079296b28ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 22 04:54:05 np0005591760 podman[255462]: 2026-01-22 09:54:05.532346848 +0000 UTC m=+0.093748791 container attach b4219d3adf046de19d180082ff72da44444d04fed799fb0eefa04079296b28ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 22 04:54:05 np0005591760 podman[255462]: 2026-01-22 09:54:05.45640996 +0000 UTC m=+0.017811923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:54:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:05 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d45c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:05.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:05 np0005591760 modest_matsumoto[255475]: {}
Jan 22 04:54:05 np0005591760 lvm[255552]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:54:05 np0005591760 lvm[255552]: VG ceph_vg0 finished
Jan 22 04:54:06 np0005591760 systemd[1]: libpod-b4219d3adf046de19d180082ff72da44444d04fed799fb0eefa04079296b28ff.scope: Deactivated successfully.
Jan 22 04:54:06 np0005591760 podman[255462]: 2026-01-22 09:54:06.014278876 +0000 UTC m=+0.575680819 container died b4219d3adf046de19d180082ff72da44444d04fed799fb0eefa04079296b28ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_matsumoto, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:54:06 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0b4495c8a8206eb45f12f9832928fcceebb38011860a973330ac514bddf54c32-merged.mount: Deactivated successfully.
Jan 22 04:54:06 np0005591760 podman[255462]: 2026-01-22 09:54:06.0378295 +0000 UTC m=+0.599231443 container remove b4219d3adf046de19d180082ff72da44444d04fed799fb0eefa04079296b28ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 04:54:06 np0005591760 systemd[1]: libpod-conmon-b4219d3adf046de19d180082ff72da44444d04fed799fb0eefa04079296b28ff.scope: Deactivated successfully.
Jan 22 04:54:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:54:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:54:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:06.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:06 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d45c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:54:06 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v657: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 122 KiB/s wr, 25 op/s
Jan 22 04:54:06 np0005591760 nova_compute[248045]: 2026-01-22 09:54:06.928 248049 DEBUG nova.network.neutron [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Successfully created port: 91280b50-6216-453e-ac85-e2ca5154e693 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 22 04:54:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:07 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078005990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:07.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:07.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:07.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:07.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:07 np0005591760 podman[255590]: 2026-01-22 09:54:07.06867181 +0000 UTC m=+0.059795498 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 22 04:54:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:54:07] "GET /metrics HTTP/1.1" 200 48603 "" "Prometheus/2.51.0"
Jan 22 04:54:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:54:07] "GET /metrics HTTP/1.1" 200 48603 "" "Prometheus/2.51.0"
Jan 22 04:54:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:07 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d45c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:07.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:08.082 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:54:08 np0005591760 nova_compute[248045]: 2026-01-22 09:54:08.327 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:54:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:08.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:54:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:54:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:08 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d45c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:08 np0005591760 nova_compute[248045]: 2026-01-22 09:54:08.865 248049 DEBUG nova.network.neutron [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Successfully updated port: 91280b50-6216-453e-ac85-e2ca5154e693 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 04:54:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:08.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:08 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v658: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 122 KiB/s wr, 25 op/s
Jan 22 04:54:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:08.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:08.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:08.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:08 np0005591760 nova_compute[248045]: 2026-01-22 09:54:08.880 248049 DEBUG oslo_concurrency.lockutils [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:54:08 np0005591760 nova_compute[248045]: 2026-01-22 09:54:08.881 248049 DEBUG oslo_concurrency.lockutils [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquired lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:54:08 np0005591760 nova_compute[248045]: 2026-01-22 09:54:08.881 248049 DEBUG nova.network.neutron [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 04:54:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:09 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d45c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:09 np0005591760 nova_compute[248045]: 2026-01-22 09:54:09.231 248049 DEBUG nova.compute.manager [req-61c5869e-0802-4674-976a-991d1f9df109 req-e08da06b-e620-4562-8d31-35a92a964e1c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-changed-91280b50-6216-453e-ac85-e2ca5154e693 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:54:09 np0005591760 nova_compute[248045]: 2026-01-22 09:54:09.231 248049 DEBUG nova.compute.manager [req-61c5869e-0802-4674-976a-991d1f9df109 req-e08da06b-e620-4562-8d31-35a92a964e1c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Refreshing instance network info cache due to event network-changed-91280b50-6216-453e-ac85-e2ca5154e693. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 04:54:09 np0005591760 nova_compute[248045]: 2026-01-22 09:54:09.231 248049 DEBUG oslo_concurrency.lockutils [req-61c5869e-0802-4674-976a-991d1f9df109 req-e08da06b-e620-4562-8d31-35a92a964e1c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:54:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:09 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c004370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:09.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:10 np0005591760 nova_compute[248045]: 2026-01-22 09:54:10.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:54:10 np0005591760 nova_compute[248045]: 2026-01-22 09:54:10.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:54:10 np0005591760 nova_compute[248045]: 2026-01-22 09:54:10.304 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:10 np0005591760 nova_compute[248045]: 2026-01-22 09:54:10.319 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:54:10 np0005591760 nova_compute[248045]: 2026-01-22 09:54:10.319 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:54:10 np0005591760 nova_compute[248045]: 2026-01-22 09:54:10.320 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 04:54:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:54:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:10.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:54:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:10 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:10 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v659: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 124 KiB/s wr, 25 op/s
Jan 22 04:54:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:11 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c004370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.145 248049 DEBUG nova.network.neutron [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updating instance_info_cache with network_info: [{"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.163 248049 DEBUG oslo_concurrency.lockutils [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Releasing lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.164 248049 DEBUG oslo_concurrency.lockutils [req-61c5869e-0802-4674-976a-991d1f9df109 req-e08da06b-e620-4562-8d31-35a92a964e1c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquired lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.165 248049 DEBUG nova.network.neutron [req-61c5869e-0802-4674-976a-991d1f9df109 req-e08da06b-e620-4562-8d31-35a92a964e1c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Refreshing network info cache for port 91280b50-6216-453e-ac85-e2ca5154e693 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.167 248049 DEBUG nova.virt.libvirt.vif [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T09:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1749369208',display_name='tempest-TestNetworkBasicOps-server-1749369208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1749369208',id=3,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxd8JIG0RPWf626mvreziWH2yiDFb0P2k8/NmwFLa/mbiOATVrdIbT3XTswnWNCIKRjsuwY+qEKA7uFd5Fwl2pMhqN8nk75u2lj0iA6TzMzRcmFVjay/UWg9wRlHSd1NQ==',key_name='tempest-TestNetworkBasicOps-1639780854',keypairs=<?>,launch_index=0,launched_at=2026-01-22T09:53:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-djmzrb11',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T09:53:45Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.168 248049 DEBUG nova.network.os_vif_util [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.168 248049 DEBUG nova.network.os_vif_util [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:14:7a,bridge_name='br-int',has_traffic_filtering=True,id=91280b50-6216-453e-ac85-e2ca5154e693,network=Network(50dc3f83-26ba-4322-b50a-d6cc1ecbc08c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91280b50-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.168 248049 DEBUG os_vif [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:14:7a,bridge_name='br-int',has_traffic_filtering=True,id=91280b50-6216-453e-ac85-e2ca5154e693,network=Network(50dc3f83-26ba-4322-b50a-d6cc1ecbc08c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91280b50-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.169 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.169 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.169 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.171 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.171 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91280b50-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.171 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91280b50-62, col_values=(('external_ids', {'iface-id': '91280b50-6216-453e-ac85-e2ca5154e693', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:14:7a', 'vm-uuid': 'd967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:54:11 np0005591760 NetworkManager[48920]: <info>  [1769075651.1735] manager: (tap91280b50-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.172 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.176 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.178 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.178 248049 INFO os_vif [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:14:7a,bridge_name='br-int',has_traffic_filtering=True,id=91280b50-6216-453e-ac85-e2ca5154e693,network=Network(50dc3f83-26ba-4322-b50a-d6cc1ecbc08c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91280b50-62')#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.179 248049 DEBUG nova.virt.libvirt.vif [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T09:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1749369208',display_name='tempest-TestNetworkBasicOps-server-1749369208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1749369208',id=3,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxd8JIG0RPWf626mvreziWH2yiDFb0P2k8/NmwFLa/mbiOATVrdIbT3XTswnWNCIKRjsuwY+qEKA7uFd5Fwl2pMhqN8nk75u2lj0iA6TzMzRcmFVjay/UWg9wRlHSd1NQ==',key_name='tempest-TestNetworkBasicOps-1639780854',keypairs=<?>,launch_index=0,launched_at=2026-01-22T09:53:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-djmzrb11',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T09:53:45Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.179 248049 DEBUG nova.network.os_vif_util [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.180 248049 DEBUG nova.network.os_vif_util [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:14:7a,bridge_name='br-int',has_traffic_filtering=True,id=91280b50-6216-453e-ac85-e2ca5154e693,network=Network(50dc3f83-26ba-4322-b50a-d6cc1ecbc08c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91280b50-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.181 248049 DEBUG nova.virt.libvirt.guest [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] attach device xml: <interface type="ethernet">
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  <mac address="fa:16:3e:b4:14:7a"/>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  <model type="virtio"/>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  <driver name="vhost" rx_queue_size="512"/>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  <mtu size="1442"/>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  <target dev="tap91280b50-62"/>
Jan 22 04:54:11 np0005591760 nova_compute[248045]: </interface>
Jan 22 04:54:11 np0005591760 nova_compute[248045]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 22 04:54:11 np0005591760 kernel: tap91280b50-62: entered promiscuous mode
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.189 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:11 np0005591760 NetworkManager[48920]: <info>  [1769075651.1904] manager: (tap91280b50-62): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 22 04:54:11 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:11Z|00045|binding|INFO|Claiming lport 91280b50-6216-453e-ac85-e2ca5154e693 for this chassis.
Jan 22 04:54:11 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:11Z|00046|binding|INFO|91280b50-6216-453e-ac85-e2ca5154e693: Claiming fa:16:3e:b4:14:7a 10.100.0.23
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.198 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:14:7a 10.100.0.23'], port_security=['fa:16:3e:b4:14:7a 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'd967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '36f82e25-219e-420f-acf7-94f16329ca95', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c495f361-059d-41ad-b945-183e74b3d9f6, chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], logical_port=91280b50-6216-453e-ac85-e2ca5154e693) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.198 164103 INFO neutron.agent.ovn.metadata.agent [-] Port 91280b50-6216-453e-ac85-e2ca5154e693 in datapath 50dc3f83-26ba-4322-b50a-d6cc1ecbc08c bound to our chassis#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.200 164103 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50dc3f83-26ba-4322-b50a-d6cc1ecbc08c#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.207 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[0f35cb8f-d3b3-4ccb-9582-85ed8ff25bf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.208 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50dc3f83-21 in ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.210 253045 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50dc3f83-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.210 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[bc34ef03-0246-450c-b3d9-7e9787b5601f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.213 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[b8cb87b0-e136-4f0b-9aae-8e53541dffd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 systemd-udevd[255625]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.225 164492 DEBUG oslo.privsep.daemon [-] privsep: reply[adc525e2-beee-4dbc-b5ef-7ab0b82f0efb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 NetworkManager[48920]: <info>  [1769075651.2436] device (tap91280b50-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 04:54:11 np0005591760 NetworkManager[48920]: <info>  [1769075651.2443] device (tap91280b50-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.250 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[4ccc1a86-e3fd-4bf4-9d66-06137778703b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.247 248049 DEBUG nova.virt.libvirt.driver [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.248 248049 DEBUG nova.virt.libvirt.driver [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.248 248049 DEBUG nova.virt.libvirt.driver [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No VIF found with MAC fa:16:3e:37:cc:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.248 248049 DEBUG nova.virt.libvirt.driver [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No VIF found with MAC fa:16:3e:b4:14:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.250 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:11 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:11Z|00047|binding|INFO|Setting lport 91280b50-6216-453e-ac85-e2ca5154e693 ovn-installed in OVS
Jan 22 04:54:11 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:11Z|00048|binding|INFO|Setting lport 91280b50-6216-453e-ac85-e2ca5154e693 up in Southbound
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.253 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.273 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[5b28b3e5-111c-428e-928f-04a9a16cdbef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.274 248049 DEBUG nova.virt.libvirt.guest [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  <nova:name>tempest-TestNetworkBasicOps-server-1749369208</nova:name>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  <nova:creationTime>2026-01-22 09:54:11</nova:creationTime>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  <nova:flavor name="m1.nano">
Jan 22 04:54:11 np0005591760 nova_compute[248045]:    <nova:memory>128</nova:memory>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:    <nova:disk>1</nova:disk>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:    <nova:swap>0</nova:swap>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:    <nova:ephemeral>0</nova:ephemeral>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:    <nova:vcpus>1</nova:vcpus>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  </nova:flavor>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  <nova:owner>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:    <nova:user uuid="4428dd9b0fb64c25b8f33b0050d4ef6f">tempest-TestNetworkBasicOps-349110285-project-member</nova:user>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:    <nova:project uuid="05af97dae0f4449ba7eb640bcd3f61e6">tempest-TestNetworkBasicOps-349110285</nova:project>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  </nova:owner>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  <nova:root type="image" uuid="bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d"/>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  <nova:ports>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:    <nova:port uuid="abfe2221-d53c-4fee-9063-5d7c426351e3">
Jan 22 04:54:11 np0005591760 nova_compute[248045]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:    </nova:port>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:    <nova:port uuid="91280b50-6216-453e-ac85-e2ca5154e693">
Jan 22 04:54:11 np0005591760 nova_compute[248045]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:    </nova:port>
Jan 22 04:54:11 np0005591760 nova_compute[248045]:  </nova:ports>
Jan 22 04:54:11 np0005591760 nova_compute[248045]: </nova:instance>
Jan 22 04:54:11 np0005591760 nova_compute[248045]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 22 04:54:11 np0005591760 systemd-udevd[255628]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:54:11 np0005591760 NetworkManager[48920]: <info>  [1769075651.2782] manager: (tap50dc3f83-20): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.277 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[fa54d279-42f9-4039-ad93-876c9d035e7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.292 248049 DEBUG oslo_concurrency.lockutils [None req-209d0193-04ec-4f36-9351-1aab6f9e1786 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "interface-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.304 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[dccbd27c-4ed7-40d4-af91-11048b1a85ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.307 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[e85c3750-c292-4f68-9570-6db9880633e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 NetworkManager[48920]: <info>  [1769075651.3221] device (tap50dc3f83-20): carrier: link connected
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.325 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[268d84c4-dfa5-441c-a0cd-44b942eba686]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.338 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[01e723ee-3bb4-4ce8-a98d-552e5fce1f6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50dc3f83-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:54:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 323069, 'reachable_time': 18388, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255642, 'error': None, 'target': 'ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.349 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[cc838064-0128-41c7-b62d-91aff1295faf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe77:54d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 323069, 'tstamp': 323069}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255643, 'error': None, 'target': 'ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.361 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[e9043326-80bb-47ab-8113-67e1ad7132d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50dc3f83-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:54:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 323069, 'reachable_time': 18388, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255644, 'error': None, 'target': 'ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.383 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[c02b0ba1-24c6-431a-a81e-435a3f842e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.421 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[e436d5ca-d879-4da7-8319-ee0611046f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.423 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50dc3f83-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.423 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.424 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50dc3f83-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:54:11 np0005591760 NetworkManager[48920]: <info>  [1769075651.4260] manager: (tap50dc3f83-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.425 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:11 np0005591760 kernel: tap50dc3f83-20: entered promiscuous mode
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.429 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.429 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50dc3f83-20, col_values=(('external_ids', {'iface-id': 'f58f6659-8a8c-4faa-b636-0c11cb8044e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:54:11 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:11Z|00049|binding|INFO|Releasing lport f58f6659-8a8c-4faa-b636-0c11cb8044e1 from this chassis (sb_readonly=0)
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.431 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.448 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.448 164103 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50dc3f83-26ba-4322-b50a-d6cc1ecbc08c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50dc3f83-26ba-4322-b50a-d6cc1ecbc08c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.449 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[e05a49e9-118c-483f-b00f-0b7186cdb75c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.450 164103 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: global
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    log         /dev/log local0 debug
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    log-tag     haproxy-metadata-proxy-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    user        root
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    group       root
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    maxconn     1024
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    pidfile     /var/lib/neutron/external/pids/50dc3f83-26ba-4322-b50a-d6cc1ecbc08c.pid.haproxy
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    daemon
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: defaults
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    log global
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    mode http
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    option httplog
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    option dontlognull
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    option http-server-close
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    option forwardfor
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    retries                 3
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    timeout http-request    30s
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    timeout connect         30s
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    timeout client          32s
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    timeout server          32s
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    timeout http-keep-alive 30s
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: listen listener
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    bind 169.254.169.254:80
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    server metadata /var/lib/neutron/metadata_proxy
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]:    http-request add-header X-OVN-Network-ID 50dc3f83-26ba-4322-b50a-d6cc1ecbc08c
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 22 04:54:11 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:11.450 164103 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c', 'env', 'PROCESS_TAG=haproxy-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50dc3f83-26ba-4322-b50a-d6cc1ecbc08c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.569 248049 DEBUG nova.compute.manager [req-b114a378-ad29-463d-9c77-dd19c80c9a78 req-4e0356d8-b687-41d4-a447-1538c597dffc e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-vif-plugged-91280b50-6216-453e-ac85-e2ca5154e693 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.569 248049 DEBUG oslo_concurrency.lockutils [req-b114a378-ad29-463d-9c77-dd19c80c9a78 req-4e0356d8-b687-41d4-a447-1538c597dffc e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.569 248049 DEBUG oslo_concurrency.lockutils [req-b114a378-ad29-463d-9c77-dd19c80c9a78 req-4e0356d8-b687-41d4-a447-1538c597dffc e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.570 248049 DEBUG oslo_concurrency.lockutils [req-b114a378-ad29-463d-9c77-dd19c80c9a78 req-4e0356d8-b687-41d4-a447-1538c597dffc e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.570 248049 DEBUG nova.compute.manager [req-b114a378-ad29-463d-9c77-dd19c80c9a78 req-4e0356d8-b687-41d4-a447-1538c597dffc e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] No waiting events found dispatching network-vif-plugged-91280b50-6216-453e-ac85-e2ca5154e693 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 04:54:11 np0005591760 nova_compute[248045]: 2026-01-22 09:54:11.570 248049 WARNING nova.compute.manager [req-b114a378-ad29-463d-9c77-dd19c80c9a78 req-4e0356d8-b687-41d4-a447-1538c597dffc e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received unexpected event network-vif-plugged-91280b50-6216-453e-ac85-e2ca5154e693 for instance with vm_state active and task_state None.#033[00m
Jan 22 04:54:11 np0005591760 podman[255673]: 2026-01-22 09:54:11.740217386 +0000 UTC m=+0.033459581 container create 97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 22 04:54:11 np0005591760 systemd[1]: Started libpod-conmon-97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab.scope.
Jan 22 04:54:11 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:54:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:11 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d45c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:11 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a70552b7508d2095566ddc8fbf899349c3148de1f461be8f4485a772266186c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 04:54:11 np0005591760 podman[255673]: 2026-01-22 09:54:11.799131612 +0000 UTC m=+0.092373827 container init 97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 04:54:11 np0005591760 podman[255673]: 2026-01-22 09:54:11.803792422 +0000 UTC m=+0.097034616 container start 97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 04:54:11 np0005591760 podman[255673]: 2026-01-22 09:54:11.724443058 +0000 UTC m=+0.017685273 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 22 04:54:11 np0005591760 neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c[255685]: [NOTICE]   (255689) : New worker (255691) forked
Jan 22 04:54:11 np0005591760 neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c[255685]: [NOTICE]   (255689) : Loading success.
Jan 22 04:54:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:11.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.172 248049 DEBUG nova.network.neutron [req-61c5869e-0802-4674-976a-991d1f9df109 req-e08da06b-e620-4562-8d31-35a92a964e1c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updated VIF entry in instance network info cache for port 91280b50-6216-453e-ac85-e2ca5154e693. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.173 248049 DEBUG nova.network.neutron [req-61c5869e-0802-4674-976a-991d1f9df109 req-e08da06b-e620-4562-8d31-35a92a964e1c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updating instance_info_cache with network_info: [{"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.192 248049 DEBUG oslo_concurrency.lockutils [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "interface-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-91280b50-6216-453e-ac85-e2ca5154e693" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.193 248049 DEBUG oslo_concurrency.lockutils [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "interface-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-91280b50-6216-453e-ac85-e2ca5154e693" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.196 248049 DEBUG oslo_concurrency.lockutils [req-61c5869e-0802-4674-976a-991d1f9df109 req-e08da06b-e620-4562-8d31-35a92a964e1c e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Releasing lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.206 248049 DEBUG nova.objects.instance [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lazy-loading 'flavor' on Instance uuid d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.221 248049 DEBUG nova.virt.libvirt.vif [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T09:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1749369208',display_name='tempest-TestNetworkBasicOps-server-1749369208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1749369208',id=3,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxd8JIG0RPWf626mvreziWH2yiDFb0P2k8/NmwFLa/mbiOATVrdIbT3XTswnWNCIKRjsuwY+qEKA7uFd5Fwl2pMhqN8nk75u2lj0iA6TzMzRcmFVjay/UWg9wRlHSd1NQ==',key_name='tempest-TestNetworkBasicOps-1639780854',keypairs=<?>,launch_index=0,launched_at=2026-01-22T09:53:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-djmzrb11',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T09:53:45Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.222 248049 DEBUG nova.network.os_vif_util [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.222 248049 DEBUG nova.network.os_vif_util [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:14:7a,bridge_name='br-int',has_traffic_filtering=True,id=91280b50-6216-453e-ac85-e2ca5154e693,network=Network(50dc3f83-26ba-4322-b50a-d6cc1ecbc08c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91280b50-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.224 248049 DEBUG nova.virt.libvirt.guest [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b4:14:7a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap91280b50-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.225 248049 DEBUG nova.virt.libvirt.guest [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b4:14:7a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap91280b50-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.227 248049 DEBUG nova.virt.libvirt.driver [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Attempting to detach device tap91280b50-62 from instance d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.227 248049 DEBUG nova.virt.libvirt.guest [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] detach device xml: <interface type="ethernet">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <mac address="fa:16:3e:b4:14:7a"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <model type="virtio"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <driver name="vhost" rx_queue_size="512"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <mtu size="1442"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <target dev="tap91280b50-62"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]: </interface>
Jan 22 04:54:12 np0005591760 nova_compute[248045]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.230 248049 DEBUG nova.virt.libvirt.guest [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b4:14:7a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap91280b50-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.232 248049 DEBUG nova.virt.libvirt.guest [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b4:14:7a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap91280b50-62"/></interface>not found in domain: <domain type='kvm' id='2'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <name>instance-00000003</name>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <uuid>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</uuid>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <metadata>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:name>tempest-TestNetworkBasicOps-server-1749369208</nova:name>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:creationTime>2026-01-22 09:54:11</nova:creationTime>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:flavor name="m1.nano">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:memory>128</nova:memory>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:disk>1</nova:disk>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:swap>0</nova:swap>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:ephemeral>0</nova:ephemeral>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:vcpus>1</nova:vcpus>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </nova:flavor>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:owner>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:user uuid="4428dd9b0fb64c25b8f33b0050d4ef6f">tempest-TestNetworkBasicOps-349110285-project-member</nova:user>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:project uuid="05af97dae0f4449ba7eb640bcd3f61e6">tempest-TestNetworkBasicOps-349110285</nova:project>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </nova:owner>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:root type="image" uuid="bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:ports>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:port uuid="abfe2221-d53c-4fee-9063-5d7c426351e3">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </nova:port>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:port uuid="91280b50-6216-453e-ac85-e2ca5154e693">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </nova:port>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </nova:ports>
Jan 22 04:54:12 np0005591760 nova_compute[248045]: </nova:instance>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </metadata>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <memory unit='KiB'>131072</memory>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <vcpu placement='static'>1</vcpu>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <resource>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <partition>/machine</partition>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </resource>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <sysinfo type='smbios'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <system>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <entry name='manufacturer'>RDO</entry>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <entry name='product'>OpenStack Compute</entry>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <entry name='serial'>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</entry>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <entry name='uuid'>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</entry>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <entry name='family'>Virtual Machine</entry>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </system>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </sysinfo>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <os>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <boot dev='hd'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <smbios mode='sysinfo'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </os>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <features>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <acpi/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <apic/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <vmcoreinfo state='on'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </features>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <cpu mode='custom' match='exact' check='full'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <model fallback='forbid'>EPYC-Milan</model>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <vendor>AMD</vendor>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='x2apic'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='tsc-deadline'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='hypervisor'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='tsc_adjust'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='vaes'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='vpclmulqdq'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='spec-ctrl'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='stibp'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='ssbd'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='cmp_legacy'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='overflow-recov'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='succor'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='virt-ssbd'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='lbrv'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='tsc-scale'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='vmcb-clean'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='flushbyasid'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='pause-filter'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='pfthreshold'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='v-vmsave-vmload'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='vgif'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='svm'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='topoext'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='npt'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='nrip-save'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </cpu>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <clock offset='utc'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <timer name='pit' tickpolicy='delay'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <timer name='hpet' present='no'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </clock>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <on_poweroff>destroy</on_poweroff>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <on_reboot>restart</on_reboot>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <on_crash>destroy</on_crash>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <devices>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <disk type='network' device='disk'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <driver name='qemu' type='raw' cache='none'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <auth username='openstack'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <secret type='ceph' uuid='43df7a30-cf5f-5209-adfd-bf44298b19f2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <source protocol='rbd' name='vms/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk' index='2'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <host name='192.168.122.100' port='6789'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <host name='192.168.122.102' port='6789'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <host name='192.168.122.101' port='6789'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target dev='vda' bus='virtio'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='virtio-disk0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <disk type='network' device='cdrom'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <driver name='qemu' type='raw' cache='none'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <auth username='openstack'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <secret type='ceph' uuid='43df7a30-cf5f-5209-adfd-bf44298b19f2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <source protocol='rbd' name='vms/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk.config' index='1'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <host name='192.168.122.100' port='6789'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <host name='192.168.122.102' port='6789'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <host name='192.168.122.101' port='6789'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target dev='sda' bus='sata'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <readonly/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='sata0-0-0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='0' model='pcie-root'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pcie.0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='1' port='0x10'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='2' port='0x11'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='3' port='0x12'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.3'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='4' port='0x13'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.4'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='5' port='0x14'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.5'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='6' port='0x15'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.6'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='7' port='0x16'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.7'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='8' port='0x17'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.8'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='9' port='0x18'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.9'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='10' port='0x19'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.10'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='11' port='0x1a'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.11'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='12' port='0x1b'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.12'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='13' port='0x1c'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.13'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='14' port='0x1d'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.14'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='15' port='0x1e'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.15'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='16' port='0x1f'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.16'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='17' port='0x20'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.17'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='18' port='0x21'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.18'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='19' port='0x22'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.19'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='20' port='0x23'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.20'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='21' port='0x24'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.21'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='22' port='0x25'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.22'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='23' port='0x26'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.23'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='24' port='0x27'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.24'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='25' port='0x28'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.25'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-pci-bridge'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.26'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='usb'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='sata' index='0'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='ide'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <interface type='ethernet'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <mac address='fa:16:3e:37:cc:90'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target dev='tapabfe2221-d5'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model type='virtio'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <driver name='vhost' rx_queue_size='512'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <mtu size='1442'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='net0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </interface>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <interface type='ethernet'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <mac address='fa:16:3e:b4:14:7a'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target dev='tap91280b50-62'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model type='virtio'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <driver name='vhost' rx_queue_size='512'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <mtu size='1442'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='net1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </interface>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <serial type='pty'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <source path='/dev/pts/0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <log file='/var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/console.log' append='off'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target type='isa-serial' port='0'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <model name='isa-serial'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      </target>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='serial0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </serial>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <console type='pty' tty='/dev/pts/0'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <source path='/dev/pts/0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <log file='/var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/console.log' append='off'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target type='serial' port='0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='serial0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </console>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <input type='tablet' bus='usb'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='input0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='usb' bus='0' port='1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </input>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <input type='mouse' bus='ps2'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='input1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </input>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <input type='keyboard' bus='ps2'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='input2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </input>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <listen type='address' address='::0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </graphics>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <audio id='1' type='none'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <video>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model type='virtio' heads='1' primary='yes'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='video0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </video>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <watchdog model='itco' action='reset'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='watchdog0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </watchdog>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <memballoon model='virtio'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <stats period='10'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='balloon0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </memballoon>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <rng model='virtio'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <backend model='random'>/dev/urandom</backend>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='rng0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </rng>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </devices>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <label>system_u:system_r:svirt_t:s0:c885,c954</label>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c885,c954</imagelabel>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </seclabel>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <label>+107:+107</label>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <imagelabel>+107:+107</imagelabel>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </seclabel>
Jan 22 04:54:12 np0005591760 nova_compute[248045]: </domain>
Jan 22 04:54:12 np0005591760 nova_compute[248045]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.233 248049 INFO nova.virt.libvirt.driver [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Successfully detached device tap91280b50-62 from instance d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 from the persistent domain config.
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.233 248049 DEBUG nova.virt.libvirt.driver [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] (1/8): Attempting to detach device tap91280b50-62 with device alias net1 from instance d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.233 248049 DEBUG nova.virt.libvirt.guest [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] detach device xml: <interface type="ethernet">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <mac address="fa:16:3e:b4:14:7a"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <model type="virtio"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <driver name="vhost" rx_queue_size="512"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <mtu size="1442"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <target dev="tap91280b50-62"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]: </interface>
Jan 22 04:54:12 np0005591760 nova_compute[248045]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.315 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.315 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.315 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.315 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.316 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:54:12 np0005591760 kernel: tap91280b50-62 (unregistering): left promiscuous mode
Jan 22 04:54:12 np0005591760 NetworkManager[48920]: <info>  [1769075652.3303] device (tap91280b50-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 04:54:12 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:12Z|00050|binding|INFO|Releasing lport 91280b50-6216-453e-ac85-e2ca5154e693 from this chassis (sb_readonly=0)
Jan 22 04:54:12 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:12Z|00051|binding|INFO|Setting lport 91280b50-6216-453e-ac85-e2ca5154e693 down in Southbound
Jan 22 04:54:12 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:12Z|00052|binding|INFO|Removing iface tap91280b50-62 ovn-installed in OVS
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.339 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.341 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:14:7a 10.100.0.23'], port_security=['fa:16:3e:b4:14:7a 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'd967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '36f82e25-219e-420f-acf7-94f16329ca95', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c495f361-059d-41ad-b945-183e74b3d9f6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], logical_port=91280b50-6216-453e-ac85-e2ca5154e693) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.342 164103 INFO neutron.agent.ovn.metadata.agent [-] Port 91280b50-6216-453e-ac85-e2ca5154e693 in datapath 50dc3f83-26ba-4322-b50a-d6cc1ecbc08c unbound from our chassis#033[00m
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.345 164103 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50dc3f83-26ba-4322-b50a-d6cc1ecbc08c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.346 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[53ee3332-ece3-40ec-ba35-cf2e63452df3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.347 164103 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c namespace which is not needed anymore#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.345 248049 DEBUG nova.virt.libvirt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Received event <DeviceRemovedEvent: 1769075652.3453336, d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.349 248049 DEBUG nova.virt.libvirt.driver [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Start waiting for the detach event from libvirt for device tap91280b50-62 with device alias net1 for instance d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.350 248049 DEBUG nova.virt.libvirt.guest [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b4:14:7a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap91280b50-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.358 248049 DEBUG nova.virt.libvirt.guest [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b4:14:7a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap91280b50-62"/></interface>not found in domain: <domain type='kvm' id='2'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <name>instance-00000003</name>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <uuid>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</uuid>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <metadata>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:name>tempest-TestNetworkBasicOps-server-1749369208</nova:name>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:creationTime>2026-01-22 09:54:11</nova:creationTime>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:flavor name="m1.nano">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:memory>128</nova:memory>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:disk>1</nova:disk>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:swap>0</nova:swap>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:ephemeral>0</nova:ephemeral>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:vcpus>1</nova:vcpus>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </nova:flavor>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:owner>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:user uuid="4428dd9b0fb64c25b8f33b0050d4ef6f">tempest-TestNetworkBasicOps-349110285-project-member</nova:user>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:project uuid="05af97dae0f4449ba7eb640bcd3f61e6">tempest-TestNetworkBasicOps-349110285</nova:project>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </nova:owner>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:root type="image" uuid="bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:ports>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:port uuid="abfe2221-d53c-4fee-9063-5d7c426351e3">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </nova:port>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:port uuid="91280b50-6216-453e-ac85-e2ca5154e693">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </nova:port>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </nova:ports>
Jan 22 04:54:12 np0005591760 nova_compute[248045]: </nova:instance>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </metadata>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <memory unit='KiB'>131072</memory>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <vcpu placement='static'>1</vcpu>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <resource>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <partition>/machine</partition>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </resource>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <sysinfo type='smbios'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <system>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <entry name='manufacturer'>RDO</entry>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <entry name='product'>OpenStack Compute</entry>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <entry name='serial'>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</entry>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <entry name='uuid'>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</entry>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <entry name='family'>Virtual Machine</entry>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </system>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </sysinfo>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <os>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <boot dev='hd'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <smbios mode='sysinfo'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </os>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <features>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <acpi/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <apic/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <vmcoreinfo state='on'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </features>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <cpu mode='custom' match='exact' check='full'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <model fallback='forbid'>EPYC-Milan</model>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <vendor>AMD</vendor>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='x2apic'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='tsc-deadline'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='hypervisor'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='tsc_adjust'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='vaes'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='vpclmulqdq'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='spec-ctrl'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='stibp'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='ssbd'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='cmp_legacy'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='overflow-recov'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='succor'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='virt-ssbd'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='lbrv'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='tsc-scale'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='vmcb-clean'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='flushbyasid'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='pause-filter'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='pfthreshold'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='v-vmsave-vmload'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='vgif'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='svm'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='require' name='topoext'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='npt'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='nrip-save'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </cpu>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <clock offset='utc'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <timer name='pit' tickpolicy='delay'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <timer name='hpet' present='no'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </clock>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <on_poweroff>destroy</on_poweroff>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <on_reboot>restart</on_reboot>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <on_crash>destroy</on_crash>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <devices>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <disk type='network' device='disk'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <driver name='qemu' type='raw' cache='none'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <auth username='openstack'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <secret type='ceph' uuid='43df7a30-cf5f-5209-adfd-bf44298b19f2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <source protocol='rbd' name='vms/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk' index='2'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <host name='192.168.122.100' port='6789'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <host name='192.168.122.102' port='6789'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <host name='192.168.122.101' port='6789'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target dev='vda' bus='virtio'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='virtio-disk0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <disk type='network' device='cdrom'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <driver name='qemu' type='raw' cache='none'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <auth username='openstack'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <secret type='ceph' uuid='43df7a30-cf5f-5209-adfd-bf44298b19f2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <source protocol='rbd' name='vms/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk.config' index='1'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <host name='192.168.122.100' port='6789'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <host name='192.168.122.102' port='6789'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <host name='192.168.122.101' port='6789'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target dev='sda' bus='sata'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <readonly/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='sata0-0-0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='0' model='pcie-root'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pcie.0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='1' port='0x10'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='2' port='0x11'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='3' port='0x12'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.3'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='4' port='0x13'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.4'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='5' port='0x14'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.5'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='6' port='0x15'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.6'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='7' port='0x16'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.7'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='8' port='0x17'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.8'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='9' port='0x18'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.9'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='10' port='0x19'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.10'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='11' port='0x1a'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.11'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='12' port='0x1b'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.12'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='13' port='0x1c'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.13'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='14' port='0x1d'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.14'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='15' port='0x1e'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.15'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='16' port='0x1f'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.16'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='17' port='0x20'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.17'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='18' port='0x21'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.18'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='19' port='0x22'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.19'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='20' port='0x23'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.20'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='21' port='0x24'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.21'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='22' port='0x25'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.22'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='23' port='0x26'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.23'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='24' port='0x27'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.24'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target chassis='25' port='0x28'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.25'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model name='pcie-pci-bridge'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='pci.26'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='usb'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <controller type='sata' index='0'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='ide'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <interface type='ethernet'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <mac address='fa:16:3e:37:cc:90'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target dev='tapabfe2221-d5'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model type='virtio'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <driver name='vhost' rx_queue_size='512'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <mtu size='1442'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='net0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </interface>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <serial type='pty'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <source path='/dev/pts/0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <log file='/var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/console.log' append='off'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target type='isa-serial' port='0'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:        <model name='isa-serial'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      </target>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='serial0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </serial>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <console type='pty' tty='/dev/pts/0'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <source path='/dev/pts/0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <log file='/var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/console.log' append='off'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <target type='serial' port='0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='serial0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </console>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <input type='tablet' bus='usb'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='input0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='usb' bus='0' port='1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </input>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <input type='mouse' bus='ps2'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='input1'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </input>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <input type='keyboard' bus='ps2'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='input2'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </input>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <listen type='address' address='::0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </graphics>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <audio id='1' type='none'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <video>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <model type='virtio' heads='1' primary='yes'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='video0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </video>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <watchdog model='itco' action='reset'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='watchdog0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </watchdog>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <memballoon model='virtio'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <stats period='10'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='balloon0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </memballoon>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <rng model='virtio'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <backend model='random'>/dev/urandom</backend>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <alias name='rng0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </rng>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </devices>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <label>system_u:system_r:svirt_t:s0:c885,c954</label>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c885,c954</imagelabel>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </seclabel>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <label>+107:+107</label>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <imagelabel>+107:+107</imagelabel>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </seclabel>
Jan 22 04:54:12 np0005591760 nova_compute[248045]: </domain>
Jan 22 04:54:12 np0005591760 nova_compute[248045]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.362 248049 INFO nova.virt.libvirt.driver [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Successfully detached device tap91280b50-62 from instance d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 from the live domain config.#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.363 248049 DEBUG nova.virt.libvirt.vif [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T09:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1749369208',display_name='tempest-TestNetworkBasicOps-server-1749369208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1749369208',id=3,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxd8JIG0RPWf626mvreziWH2yiDFb0P2k8/NmwFLa/mbiOATVrdIbT3XTswnWNCIKRjsuwY+qEKA7uFd5Fwl2pMhqN8nk75u2lj0iA6TzMzRcmFVjay/UWg9wRlHSd1NQ==',key_name='tempest-TestNetworkBasicOps-1639780854',keypairs=<?>,launch_index=0,launched_at=2026-01-22T09:53:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-djmzrb11',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T09:53:45Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.363 248049 DEBUG nova.network.os_vif_util [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.364 248049 DEBUG nova.network.os_vif_util [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:14:7a,bridge_name='br-int',has_traffic_filtering=True,id=91280b50-6216-453e-ac85-e2ca5154e693,network=Network(50dc3f83-26ba-4322-b50a-d6cc1ecbc08c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91280b50-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:54:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.364 248049 DEBUG os_vif [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:14:7a,bridge_name='br-int',has_traffic_filtering=True,id=91280b50-6216-453e-ac85-e2ca5154e693,network=Network(50dc3f83-26ba-4322-b50a-d6cc1ecbc08c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91280b50-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 22 04:54:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:12.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.366 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.367 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91280b50-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.367 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.369 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.369 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.371 248049 INFO os_vif [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:14:7a,bridge_name='br-int',has_traffic_filtering=True,id=91280b50-6216-453e-ac85-e2ca5154e693,network=Network(50dc3f83-26ba-4322-b50a-d6cc1ecbc08c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91280b50-62')#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.371 248049 DEBUG nova.virt.libvirt.guest [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:name>tempest-TestNetworkBasicOps-server-1749369208</nova:name>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:creationTime>2026-01-22 09:54:12</nova:creationTime>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:flavor name="m1.nano">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:memory>128</nova:memory>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:disk>1</nova:disk>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:swap>0</nova:swap>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:ephemeral>0</nova:ephemeral>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:vcpus>1</nova:vcpus>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </nova:flavor>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:owner>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:user uuid="4428dd9b0fb64c25b8f33b0050d4ef6f">tempest-TestNetworkBasicOps-349110285-project-member</nova:user>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:project uuid="05af97dae0f4449ba7eb640bcd3f61e6">tempest-TestNetworkBasicOps-349110285</nova:project>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </nova:owner>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:root type="image" uuid="bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  <nova:ports>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    <nova:port uuid="abfe2221-d53c-4fee-9063-5d7c426351e3">
Jan 22 04:54:12 np0005591760 nova_compute[248045]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:    </nova:port>
Jan 22 04:54:12 np0005591760 nova_compute[248045]:  </nova:ports>
Jan 22 04:54:12 np0005591760 nova_compute[248045]: </nova:instance>
Jan 22 04:54:12 np0005591760 nova_compute[248045]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 22 04:54:12 np0005591760 neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c[255685]: [NOTICE]   (255689) : haproxy version is 2.8.14-c23fe91
Jan 22 04:54:12 np0005591760 neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c[255685]: [NOTICE]   (255689) : path to executable is /usr/sbin/haproxy
Jan 22 04:54:12 np0005591760 neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c[255685]: [WARNING]  (255689) : Exiting Master process...
Jan 22 04:54:12 np0005591760 neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c[255685]: [ALERT]    (255689) : Current worker (255691) exited with code 143 (Terminated)
Jan 22 04:54:12 np0005591760 neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c[255685]: [WARNING]  (255689) : All workers exited. Exiting... (0)
Jan 22 04:54:12 np0005591760 systemd[1]: libpod-97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab.scope: Deactivated successfully.
Jan 22 04:54:12 np0005591760 conmon[255685]: conmon 97cbb57390671bd7aef6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab.scope/container/memory.events
Jan 22 04:54:12 np0005591760 podman[255742]: 2026-01-22 09:54:12.462477535 +0000 UTC m=+0.040084835 container died 97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 04:54:12 np0005591760 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab-userdata-shm.mount: Deactivated successfully.
Jan 22 04:54:12 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3a70552b7508d2095566ddc8fbf899349c3148de1f461be8f4485a772266186c-merged.mount: Deactivated successfully.
Jan 22 04:54:12 np0005591760 podman[255742]: 2026-01-22 09:54:12.485553153 +0000 UTC m=+0.063160443 container cleanup 97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 22 04:54:12 np0005591760 systemd[1]: libpod-conmon-97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab.scope: Deactivated successfully.
Jan 22 04:54:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:12 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:12 np0005591760 podman[255781]: 2026-01-22 09:54:12.536887544 +0000 UTC m=+0.026289370 container remove 97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.541 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[9d85e73c-1ead-4b30-a8b9-42d1e9196582]: (4, ('Thu Jan 22 09:54:12 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c (97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab)\n97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab\nThu Jan 22 09:54:12 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c (97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab)\n97cbb57390671bd7aef60403ad6de3586f77844ffcf423438b47dba28b6d15ab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.542 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[607fc671-7ad5-46a9-b901-940c40cd5734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.543 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50dc3f83-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.546 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:12 np0005591760 kernel: tap50dc3f83-20: left promiscuous mode
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.564 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.566 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[b2acd8cb-486e-4540-ba7b-f28d1a80f9c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.575 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[e732cfdc-083a-4366-be1d-317721805eb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.576 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[39f248ed-c3d8-420b-aa46-6f04f9f97126]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.588 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[80b0b7bb-7fa6-4ca6-8cc9-7ee5a79ec803]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 323064, 'reachable_time': 28535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255793, 'error': None, 'target': 'ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:12 np0005591760 systemd[1]: run-netns-ovnmeta\x2d50dc3f83\x2d26ba\x2d4322\x2db50a\x2dd6cc1ecbc08c.mount: Deactivated successfully.
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.592 164492 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50dc3f83-26ba-4322-b50a-d6cc1ecbc08c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 22 04:54:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:12.592 164492 DEBUG oslo.privsep.daemon [-] privsep: reply[76741a26-3bbd-4928-9b33-195468fd72d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:54:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3474731304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.708 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.752 248049 DEBUG nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.752 248049 DEBUG nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 04:54:12 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v660: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 290 B/s rd, 17 KiB/s wr, 0 op/s
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.969 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.970 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4431MB free_disk=59.942726135253906GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, 
"label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.970 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:12 np0005591760 nova_compute[248045]: 2026-01-22 09:54:12.970 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.020 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Instance d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.020 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.021 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 04:54:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:13 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.062 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.229 248049 DEBUG oslo_concurrency.lockutils [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.230 248049 DEBUG oslo_concurrency.lockutils [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquired lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.231 248049 DEBUG nova.network.neutron [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 04:54:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:54:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:54:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3115242075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.433 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.436 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.446 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.458 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.459 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.575 248049 DEBUG nova.compute.manager [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-vif-plugged-91280b50-6216-453e-ac85-e2ca5154e693 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.575 248049 DEBUG oslo_concurrency.lockutils [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.575 248049 DEBUG oslo_concurrency.lockutils [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.575 248049 DEBUG oslo_concurrency.lockutils [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.575 248049 DEBUG nova.compute.manager [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] No waiting events found dispatching network-vif-plugged-91280b50-6216-453e-ac85-e2ca5154e693 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.575 248049 WARNING nova.compute.manager [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received unexpected event network-vif-plugged-91280b50-6216-453e-ac85-e2ca5154e693 for instance with vm_state active and task_state None.#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.576 248049 DEBUG nova.compute.manager [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-vif-unplugged-91280b50-6216-453e-ac85-e2ca5154e693 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.576 248049 DEBUG oslo_concurrency.lockutils [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.576 248049 DEBUG oslo_concurrency.lockutils [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.576 248049 DEBUG oslo_concurrency.lockutils [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.576 248049 DEBUG nova.compute.manager [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] No waiting events found dispatching network-vif-unplugged-91280b50-6216-453e-ac85-e2ca5154e693 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.576 248049 WARNING nova.compute.manager [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received unexpected event network-vif-unplugged-91280b50-6216-453e-ac85-e2ca5154e693 for instance with vm_state active and task_state None.#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.576 248049 DEBUG nova.compute.manager [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-vif-plugged-91280b50-6216-453e-ac85-e2ca5154e693 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.577 248049 DEBUG oslo_concurrency.lockutils [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.577 248049 DEBUG oslo_concurrency.lockutils [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.577 248049 DEBUG oslo_concurrency.lockutils [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.577 248049 DEBUG nova.compute.manager [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] No waiting events found dispatching network-vif-plugged-91280b50-6216-453e-ac85-e2ca5154e693 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.577 248049 WARNING nova.compute.manager [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received unexpected event network-vif-plugged-91280b50-6216-453e-ac85-e2ca5154e693 for instance with vm_state active and task_state None.#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.577 248049 DEBUG nova.compute.manager [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-vif-deleted-91280b50-6216-453e-ac85-e2ca5154e693 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.577 248049 INFO nova.compute.manager [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Neutron deleted interface 91280b50-6216-453e-ac85-e2ca5154e693; detaching it from the instance and deleting it from the info cache#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.578 248049 DEBUG nova.network.neutron [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updating instance_info_cache with network_info: [{"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.613 248049 DEBUG nova.objects.instance [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lazy-loading 'system_metadata' on Instance uuid d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.658 248049 DEBUG nova.objects.instance [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lazy-loading 'flavor' on Instance uuid d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.697 248049 DEBUG nova.virt.libvirt.vif [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T09:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1749369208',display_name='tempest-TestNetworkBasicOps-server-1749369208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1749369208',id=3,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxd8JIG0RPWf626mvreziWH2yiDFb0P2k8/NmwFLa/mbiOATVrdIbT3XTswnWNCIKRjsuwY+qEKA7uFd5Fwl2pMhqN8nk75u2lj0iA6TzMzRcmFVjay/UWg9wRlHSd1NQ==',key_name='tempest-TestNetworkBasicOps-1639780854',keypairs=<?>,launch_index=0,launched_at=2026-01-22T09:53:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-djmzrb11',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T09:53:45Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.697 248049 DEBUG nova.network.os_vif_util [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Converting VIF {"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.698 248049 DEBUG nova.network.os_vif_util [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:14:7a,bridge_name='br-int',has_traffic_filtering=True,id=91280b50-6216-453e-ac85-e2ca5154e693,network=Network(50dc3f83-26ba-4322-b50a-d6cc1ecbc08c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91280b50-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.700 248049 DEBUG nova.virt.libvirt.guest [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b4:14:7a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap91280b50-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.702 248049 DEBUG nova.virt.libvirt.guest [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b4:14:7a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap91280b50-62"/></interface>not found in domain: <domain type='kvm' id='2'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <name>instance-00000003</name>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <uuid>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</uuid>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <metadata>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:name>tempest-TestNetworkBasicOps-server-1749369208</nova:name>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:creationTime>2026-01-22 09:54:12</nova:creationTime>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:flavor name="m1.nano">
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:memory>128</nova:memory>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:disk>1</nova:disk>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:swap>0</nova:swap>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:ephemeral>0</nova:ephemeral>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:vcpus>1</nova:vcpus>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </nova:flavor>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:owner>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:user uuid="4428dd9b0fb64c25b8f33b0050d4ef6f">tempest-TestNetworkBasicOps-349110285-project-member</nova:user>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:project uuid="05af97dae0f4449ba7eb640bcd3f61e6">tempest-TestNetworkBasicOps-349110285</nova:project>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </nova:owner>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:root type="image" uuid="bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d"/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:ports>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:port uuid="abfe2221-d53c-4fee-9063-5d7c426351e3">
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </nova:port>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </nova:ports>
Jan 22 04:54:13 np0005591760 nova_compute[248045]: </nova:instance>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </metadata>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <memory unit='KiB'>131072</memory>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <vcpu placement='static'>1</vcpu>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <resource>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <partition>/machine</partition>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </resource>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <sysinfo type='smbios'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <system>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <entry name='manufacturer'>RDO</entry>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <entry name='product'>OpenStack Compute</entry>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <entry name='serial'>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</entry>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <entry name='uuid'>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</entry>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <entry name='family'>Virtual Machine</entry>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </system>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </sysinfo>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <os>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <boot dev='hd'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <smbios mode='sysinfo'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </os>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <features>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <acpi/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <apic/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <vmcoreinfo state='on'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </features>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <cpu mode='custom' match='exact' check='full'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <model fallback='forbid'>EPYC-Milan</model>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <vendor>AMD</vendor>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='x2apic'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='tsc-deadline'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='hypervisor'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='tsc_adjust'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='vaes'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='vpclmulqdq'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='spec-ctrl'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='stibp'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='ssbd'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='cmp_legacy'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='overflow-recov'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='succor'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='virt-ssbd'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='lbrv'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='tsc-scale'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='vmcb-clean'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='flushbyasid'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='pause-filter'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='pfthreshold'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='v-vmsave-vmload'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='vgif'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='svm'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='topoext'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='npt'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='nrip-save'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </cpu>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <clock offset='utc'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <timer name='pit' tickpolicy='delay'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <timer name='hpet' present='no'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </clock>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <on_poweroff>destroy</on_poweroff>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <on_reboot>restart</on_reboot>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <on_crash>destroy</on_crash>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <devices>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <disk type='network' device='disk'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <driver name='qemu' type='raw' cache='none'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <auth username='openstack'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <secret type='ceph' uuid='43df7a30-cf5f-5209-adfd-bf44298b19f2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <source protocol='rbd' name='vms/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk' index='2'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <host name='192.168.122.100' port='6789'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <host name='192.168.122.102' port='6789'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <host name='192.168.122.101' port='6789'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target dev='vda' bus='virtio'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='virtio-disk0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <disk type='network' device='cdrom'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <driver name='qemu' type='raw' cache='none'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <auth username='openstack'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <secret type='ceph' uuid='43df7a30-cf5f-5209-adfd-bf44298b19f2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <source protocol='rbd' name='vms/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk.config' index='1'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <host name='192.168.122.100' port='6789'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <host name='192.168.122.102' port='6789'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <host name='192.168.122.101' port='6789'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target dev='sda' bus='sata'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <readonly/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='sata0-0-0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='0' model='pcie-root'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pcie.0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='1' port='0x10'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='2' port='0x11'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='3' port='0x12'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.3'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='4' port='0x13'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.4'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='5' port='0x14'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.5'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='6' port='0x15'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.6'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='7' port='0x16'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.7'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='8' port='0x17'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.8'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='9' port='0x18'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.9'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='10' port='0x19'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.10'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='11' port='0x1a'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.11'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='12' port='0x1b'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.12'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='13' port='0x1c'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.13'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='14' port='0x1d'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.14'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='15' port='0x1e'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.15'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='16' port='0x1f'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.16'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='17' port='0x20'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.17'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='18' port='0x21'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.18'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='19' port='0x22'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.19'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='20' port='0x23'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.20'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='21' port='0x24'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.21'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='22' port='0x25'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.22'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='23' port='0x26'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.23'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='24' port='0x27'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.24'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='25' port='0x28'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.25'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-pci-bridge'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.26'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='usb'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='sata' index='0'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='ide'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <interface type='ethernet'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <mac address='fa:16:3e:37:cc:90'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target dev='tapabfe2221-d5'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model type='virtio'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <driver name='vhost' rx_queue_size='512'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <mtu size='1442'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='net0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </interface>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <serial type='pty'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <source path='/dev/pts/0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <log file='/var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/console.log' append='off'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target type='isa-serial' port='0'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <model name='isa-serial'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      </target>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='serial0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </serial>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <console type='pty' tty='/dev/pts/0'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <source path='/dev/pts/0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <log file='/var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/console.log' append='off'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target type='serial' port='0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='serial0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </console>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <input type='tablet' bus='usb'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='input0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='usb' bus='0' port='1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </input>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <input type='mouse' bus='ps2'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='input1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </input>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <input type='keyboard' bus='ps2'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='input2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </input>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <listen type='address' address='::0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </graphics>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <audio id='1' type='none'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <video>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model type='virtio' heads='1' primary='yes'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='video0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </video>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <watchdog model='itco' action='reset'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='watchdog0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </watchdog>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <memballoon model='virtio'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <stats period='10'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='balloon0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </memballoon>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <rng model='virtio'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <backend model='random'>/dev/urandom</backend>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='rng0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </rng>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </devices>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <label>system_u:system_r:svirt_t:s0:c885,c954</label>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c885,c954</imagelabel>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </seclabel>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <label>+107:+107</label>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <imagelabel>+107:+107</imagelabel>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </seclabel>
Jan 22 04:54:13 np0005591760 nova_compute[248045]: </domain>
Jan 22 04:54:13 np0005591760 nova_compute[248045]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.702 248049 DEBUG nova.virt.libvirt.guest [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b4:14:7a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap91280b50-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.706 248049 DEBUG nova.virt.libvirt.guest [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b4:14:7a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap91280b50-62"/></interface> not found in domain: <domain type='kvm' id='2'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <name>instance-00000003</name>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <uuid>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</uuid>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <metadata>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:name>tempest-TestNetworkBasicOps-server-1749369208</nova:name>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:creationTime>2026-01-22 09:54:12</nova:creationTime>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:flavor name="m1.nano">
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:memory>128</nova:memory>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:disk>1</nova:disk>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:swap>0</nova:swap>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:ephemeral>0</nova:ephemeral>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:vcpus>1</nova:vcpus>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </nova:flavor>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:owner>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:user uuid="4428dd9b0fb64c25b8f33b0050d4ef6f">tempest-TestNetworkBasicOps-349110285-project-member</nova:user>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:project uuid="05af97dae0f4449ba7eb640bcd3f61e6">tempest-TestNetworkBasicOps-349110285</nova:project>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </nova:owner>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:root type="image" uuid="bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d"/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:ports>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:port uuid="abfe2221-d53c-4fee-9063-5d7c426351e3">
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </nova:port>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </nova:ports>
Jan 22 04:54:13 np0005591760 nova_compute[248045]: </nova:instance>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </metadata>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <memory unit='KiB'>131072</memory>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <vcpu placement='static'>1</vcpu>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <resource>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <partition>/machine</partition>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </resource>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <sysinfo type='smbios'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <system>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <entry name='manufacturer'>RDO</entry>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <entry name='product'>OpenStack Compute</entry>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <entry name='serial'>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</entry>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <entry name='uuid'>d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995</entry>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <entry name='family'>Virtual Machine</entry>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </system>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </sysinfo>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <os>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <boot dev='hd'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <smbios mode='sysinfo'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </os>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <features>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <acpi/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <apic/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <vmcoreinfo state='on'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </features>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <cpu mode='custom' match='exact' check='full'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <model fallback='forbid'>EPYC-Milan</model>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <vendor>AMD</vendor>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='x2apic'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='tsc-deadline'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='hypervisor'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='tsc_adjust'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='vaes'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='vpclmulqdq'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='spec-ctrl'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='stibp'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='ssbd'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='cmp_legacy'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='overflow-recov'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='succor'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='virt-ssbd'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='lbrv'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='tsc-scale'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='vmcb-clean'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='flushbyasid'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='pause-filter'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='pfthreshold'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='v-vmsave-vmload'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='vgif'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='lfence-always-serializing'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='svm'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='require' name='topoext'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='npt'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='nrip-save'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <feature policy='disable' name='svme-addr-chk'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </cpu>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <clock offset='utc'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <timer name='pit' tickpolicy='delay'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <timer name='hpet' present='no'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </clock>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <on_poweroff>destroy</on_poweroff>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <on_reboot>restart</on_reboot>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <on_crash>destroy</on_crash>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <devices>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <disk type='network' device='disk'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <driver name='qemu' type='raw' cache='none'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <auth username='openstack'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <secret type='ceph' uuid='43df7a30-cf5f-5209-adfd-bf44298b19f2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <source protocol='rbd' name='vms/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk' index='2'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <host name='192.168.122.100' port='6789'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <host name='192.168.122.102' port='6789'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <host name='192.168.122.101' port='6789'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target dev='vda' bus='virtio'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='virtio-disk0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <disk type='network' device='cdrom'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <driver name='qemu' type='raw' cache='none'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <auth username='openstack'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <secret type='ceph' uuid='43df7a30-cf5f-5209-adfd-bf44298b19f2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <source protocol='rbd' name='vms/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_disk.config' index='1'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <host name='192.168.122.100' port='6789'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <host name='192.168.122.102' port='6789'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <host name='192.168.122.101' port='6789'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target dev='sda' bus='sata'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <readonly/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='sata0-0-0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='0' model='pcie-root'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pcie.0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='1' port='0x10'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='2' port='0x11'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='3' port='0x12'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.3'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='4' port='0x13'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.4'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='5' port='0x14'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.5'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='6' port='0x15'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.6'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='7' port='0x16'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.7'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='8' port='0x17'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.8'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='9' port='0x18'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.9'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='10' port='0x19'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.10'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='11' port='0x1a'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.11'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='12' port='0x1b'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.12'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='13' port='0x1c'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.13'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='14' port='0x1d'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.14'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='15' port='0x1e'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.15'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='16' port='0x1f'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.16'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='17' port='0x20'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.17'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='18' port='0x21'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.18'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='19' port='0x22'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.19'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='20' port='0x23'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.20'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='21' port='0x24'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.21'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='22' port='0x25'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.22'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='23' port='0x26'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.23'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='24' port='0x27'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.24'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-root-port'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target chassis='25' port='0x28'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.25'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model name='pcie-pci-bridge'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='pci.26'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='usb'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <controller type='sata' index='0'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='ide'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </controller>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <interface type='ethernet'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <mac address='fa:16:3e:37:cc:90'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target dev='tapabfe2221-d5'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model type='virtio'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <driver name='vhost' rx_queue_size='512'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <mtu size='1442'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='net0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </interface>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <serial type='pty'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <source path='/dev/pts/0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <log file='/var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/console.log' append='off'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target type='isa-serial' port='0'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:        <model name='isa-serial'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      </target>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='serial0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </serial>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <console type='pty' tty='/dev/pts/0'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <source path='/dev/pts/0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <log file='/var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995/console.log' append='off'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <target type='serial' port='0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='serial0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </console>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <input type='tablet' bus='usb'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='input0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='usb' bus='0' port='1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </input>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <input type='mouse' bus='ps2'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='input1'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </input>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <input type='keyboard' bus='ps2'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='input2'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </input>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <listen type='address' address='::0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </graphics>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <audio id='1' type='none'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <video>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <model type='virtio' heads='1' primary='yes'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='video0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </video>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <watchdog model='itco' action='reset'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='watchdog0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </watchdog>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <memballoon model='virtio'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <stats period='10'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='balloon0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </memballoon>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <rng model='virtio'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <backend model='random'>/dev/urandom</backend>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <alias name='rng0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </rng>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </devices>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <label>system_u:system_r:svirt_t:s0:c885,c954</label>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c885,c954</imagelabel>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </seclabel>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <label>+107:+107</label>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <imagelabel>+107:+107</imagelabel>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </seclabel>
Jan 22 04:54:13 np0005591760 nova_compute[248045]: </domain>
Jan 22 04:54:13 np0005591760 nova_compute[248045]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.706 248049 WARNING nova.virt.libvirt.driver [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Detaching interface fa:16:3e:b4:14:7a failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap91280b50-62' not found.#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.707 248049 DEBUG nova.virt.libvirt.vif [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T09:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1749369208',display_name='tempest-TestNetworkBasicOps-server-1749369208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1749369208',id=3,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxd8JIG0RPWf626mvreziWH2yiDFb0P2k8/NmwFLa/mbiOATVrdIbT3XTswnWNCIKRjsuwY+qEKA7uFd5Fwl2pMhqN8nk75u2lj0iA6TzMzRcmFVjay/UWg9wRlHSd1NQ==',key_name='tempest-TestNetworkBasicOps-1639780854',keypairs=<?>,launch_index=0,launched_at=2026-01-22T09:53:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-djmzrb11',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T09:53:45Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.707 248049 DEBUG nova.network.os_vif_util [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Converting VIF {"id": "91280b50-6216-453e-ac85-e2ca5154e693", "address": "fa:16:3e:b4:14:7a", "network": {"id": "50dc3f83-26ba-4322-b50a-d6cc1ecbc08c", "bridge": "br-int", "label": "tempest-network-smoke--1819754744", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91280b50-62", "ovs_interfaceid": "91280b50-6216-453e-ac85-e2ca5154e693", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.707 248049 DEBUG nova.network.os_vif_util [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:14:7a,bridge_name='br-int',has_traffic_filtering=True,id=91280b50-6216-453e-ac85-e2ca5154e693,network=Network(50dc3f83-26ba-4322-b50a-d6cc1ecbc08c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91280b50-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.708 248049 DEBUG os_vif [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:14:7a,bridge_name='br-int',has_traffic_filtering=True,id=91280b50-6216-453e-ac85-e2ca5154e693,network=Network(50dc3f83-26ba-4322-b50a-d6cc1ecbc08c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91280b50-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.709 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.710 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91280b50-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.710 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.711 248049 INFO os_vif [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:14:7a,bridge_name='br-int',has_traffic_filtering=True,id=91280b50-6216-453e-ac85-e2ca5154e693,network=Network(50dc3f83-26ba-4322-b50a-d6cc1ecbc08c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91280b50-62')#033[00m
Jan 22 04:54:13 np0005591760 nova_compute[248045]: 2026-01-22 09:54:13.712 248049 DEBUG nova.virt.libvirt.guest [req-4606d9a8-6f00-44c7-bd8a-d33a114e9abe req-157bf8f8-3b8d-4c16-8588-c368ed41f675 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:name>tempest-TestNetworkBasicOps-server-1749369208</nova:name>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:creationTime>2026-01-22 09:54:13</nova:creationTime>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:flavor name="m1.nano">
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:memory>128</nova:memory>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:disk>1</nova:disk>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:swap>0</nova:swap>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:ephemeral>0</nova:ephemeral>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:vcpus>1</nova:vcpus>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </nova:flavor>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:owner>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:user uuid="4428dd9b0fb64c25b8f33b0050d4ef6f">tempest-TestNetworkBasicOps-349110285-project-member</nova:user>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:project uuid="05af97dae0f4449ba7eb640bcd3f61e6">tempest-TestNetworkBasicOps-349110285</nova:project>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </nova:owner>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:root type="image" uuid="bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d"/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  <nova:ports>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    <nova:port uuid="abfe2221-d53c-4fee-9063-5d7c426351e3">
Jan 22 04:54:13 np0005591760 nova_compute[248045]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:    </nova:port>
Jan 22 04:54:13 np0005591760 nova_compute[248045]:  </nova:ports>
Jan 22 04:54:13 np0005591760 nova_compute[248045]: </nova:instance>
Jan 22 04:54:13 np0005591760 nova_compute[248045]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 22 04:54:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:13 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:13.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:14.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:14 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:14Z|00053|binding|INFO|Releasing lport f3d58566-a72c-48ab-8f65-195b3d643ba5 from this chassis (sb_readonly=0)
Jan 22 04:54:14 np0005591760 nova_compute[248045]: 2026-01-22 09:54:14.459 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:54:14 np0005591760 nova_compute[248045]: 2026-01-22 09:54:14.459 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 04:54:14 np0005591760 nova_compute[248045]: 2026-01-22 09:54:14.459 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 04:54:14 np0005591760 nova_compute[248045]: 2026-01-22 09:54:14.468 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:14 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d45c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:14 np0005591760 nova_compute[248045]: 2026-01-22 09:54:14.783 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:54:14 np0005591760 nova_compute[248045]: 2026-01-22 09:54:14.812 248049 INFO nova.network.neutron [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Port 91280b50-6216-453e-ac85-e2ca5154e693 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 22 04:54:14 np0005591760 nova_compute[248045]: 2026-01-22 09:54:14.813 248049 DEBUG nova.network.neutron [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updating instance_info_cache with network_info: [{"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:54:14 np0005591760 nova_compute[248045]: 2026-01-22 09:54:14.833 248049 DEBUG oslo_concurrency.lockutils [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Releasing lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:54:14 np0005591760 nova_compute[248045]: 2026-01-22 09:54:14.835 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquired lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:54:14 np0005591760 nova_compute[248045]: 2026-01-22 09:54:14.835 248049 DEBUG nova.network.neutron [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 04:54:14 np0005591760 nova_compute[248045]: 2026-01-22 09:54:14.835 248049 DEBUG nova.objects.instance [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lazy-loading 'info_cache' on Instance uuid d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:54:14 np0005591760 nova_compute[248045]: 2026-01-22 09:54:14.856 248049 DEBUG oslo_concurrency.lockutils [None req-abecfccd-4c31-4084-8b83-c7eab4a34b85 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "interface-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-91280b50-6216-453e-ac85-e2ca5154e693" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:14 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v661: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 15 KiB/s wr, 0 op/s
Jan 22 04:54:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:15 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.305 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.472 248049 DEBUG oslo_concurrency.lockutils [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.472 248049 DEBUG oslo_concurrency.lockutils [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.472 248049 DEBUG oslo_concurrency.lockutils [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.472 248049 DEBUG oslo_concurrency.lockutils [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.472 248049 DEBUG oslo_concurrency.lockutils [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.473 248049 INFO nova.compute.manager [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Terminating instance#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.474 248049 DEBUG nova.compute.manager [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 22 04:54:15 np0005591760 kernel: tapabfe2221-d5 (unregistering): left promiscuous mode
Jan 22 04:54:15 np0005591760 NetworkManager[48920]: <info>  [1769075655.5099] device (tapabfe2221-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 04:54:15 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:15Z|00054|binding|INFO|Releasing lport abfe2221-d53c-4fee-9063-5d7c426351e3 from this chassis (sb_readonly=0)
Jan 22 04:54:15 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:15Z|00055|binding|INFO|Setting lport abfe2221-d53c-4fee-9063-5d7c426351e3 down in Southbound
Jan 22 04:54:15 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:15Z|00056|binding|INFO|Removing iface tapabfe2221-d5 ovn-installed in OVS
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.517 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.525 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:cc:90 10.100.0.8'], port_security=['fa:16:3e:37:cc:90 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '063acbef-2bd2-4b9b-b641-bc7c62945e62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be33538f-4886-46cf-b41d-06835b51122f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], logical_port=abfe2221-d53c-4fee-9063-5d7c426351e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.526 164103 INFO neutron.agent.ovn.metadata.agent [-] Port abfe2221-d53c-4fee-9063-5d7c426351e3 in datapath a68d0b24-a42d-487a-87cc-f5ecce0ddcc8 unbound from our chassis#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.527 164103 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a68d0b24-a42d-487a-87cc-f5ecce0ddcc8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.528 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[15da6f82-3e06-4909-b5ad-8056f7659e81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.528 164103 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8 namespace which is not needed anymore#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.542 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:15 np0005591760 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 22 04:54:15 np0005591760 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 11.534s CPU time.
Jan 22 04:54:15 np0005591760 systemd-machined[216371]: Machine qemu-2-instance-00000003 terminated.
Jan 22 04:54:15 np0005591760 neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8[254802]: [NOTICE]   (254806) : haproxy version is 2.8.14-c23fe91
Jan 22 04:54:15 np0005591760 neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8[254802]: [NOTICE]   (254806) : path to executable is /usr/sbin/haproxy
Jan 22 04:54:15 np0005591760 neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8[254802]: [WARNING]  (254806) : Exiting Master process...
Jan 22 04:54:15 np0005591760 neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8[254802]: [ALERT]    (254806) : Current worker (254808) exited with code 143 (Terminated)
Jan 22 04:54:15 np0005591760 neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8[254802]: [WARNING]  (254806) : All workers exited. Exiting... (0)
Jan 22 04:54:15 np0005591760 systemd[1]: libpod-03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00.scope: Deactivated successfully.
Jan 22 04:54:15 np0005591760 podman[255844]: 2026-01-22 09:54:15.629745878 +0000 UTC m=+0.034528858 container died 03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 04:54:15 np0005591760 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00-userdata-shm.mount: Deactivated successfully.
Jan 22 04:54:15 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4a3d55c022b22cda2078d5675258d435e2ce92ef8da17c45f5b343bff2145dd7-merged.mount: Deactivated successfully.
Jan 22 04:54:15 np0005591760 podman[255844]: 2026-01-22 09:54:15.64875088 +0000 UTC m=+0.053533860 container cleanup 03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 22 04:54:15 np0005591760 systemd[1]: libpod-conmon-03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00.scope: Deactivated successfully.
Jan 22 04:54:15 np0005591760 kernel: tapabfe2221-d5: entered promiscuous mode
Jan 22 04:54:15 np0005591760 NetworkManager[48920]: <info>  [1769075655.6853] manager: (tapabfe2221-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Jan 22 04:54:15 np0005591760 kernel: tapabfe2221-d5 (unregistering): left promiscuous mode
Jan 22 04:54:15 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:15Z|00057|binding|INFO|Claiming lport abfe2221-d53c-4fee-9063-5d7c426351e3 for this chassis.
Jan 22 04:54:15 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:15Z|00058|binding|INFO|abfe2221-d53c-4fee-9063-5d7c426351e3: Claiming fa:16:3e:37:cc:90 10.100.0.8
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.689 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:15 np0005591760 podman[255868]: 2026-01-22 09:54:15.694738517 +0000 UTC m=+0.030302689 container remove 03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.697 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:cc:90 10.100.0.8'], port_security=['fa:16:3e:37:cc:90 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '063acbef-2bd2-4b9b-b641-bc7c62945e62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be33538f-4886-46cf-b41d-06835b51122f, chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], logical_port=abfe2221-d53c-4fee-9063-5d7c426351e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.701 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[b72d5266-deeb-47f0-9b88-d720a304b219]: (4, ('Thu Jan 22 09:54:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8 (03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00)\n03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00\nThu Jan 22 09:54:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8 (03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00)\n03bc2d6e14209c5d640341fb6b7f6931380fa06e26a6246b0ee7a9b4a102ad00\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.702 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[26b82ae7-f8d6-4ace-b60b-7a442c5690e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.703 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa68d0b24-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.704 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.706 248049 INFO nova.virt.libvirt.driver [-] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Instance destroyed successfully.#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.706 248049 DEBUG nova.objects.instance [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lazy-loading 'resources' on Instance uuid d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.715 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.724 248049 DEBUG nova.virt.libvirt.vif [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T09:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1749369208',display_name='tempest-TestNetworkBasicOps-server-1749369208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1749369208',id=3,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxd8JIG0RPWf626mvreziWH2yiDFb0P2k8/NmwFLa/mbiOATVrdIbT3XTswnWNCIKRjsuwY+qEKA7uFd5Fwl2pMhqN8nk75u2lj0iA6TzMzRcmFVjay/UWg9wRlHSd1NQ==',key_name='tempest-TestNetworkBasicOps-1639780854',keypairs=<?>,launch_index=0,launched_at=2026-01-22T09:53:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-djmzrb11',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T09:53:45Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.724 248049 DEBUG nova.network.os_vif_util [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.725 248049 DEBUG nova.network.os_vif_util [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:cc:90,bridge_name='br-int',has_traffic_filtering=True,id=abfe2221-d53c-4fee-9063-5d7c426351e3,network=Network(a68d0b24-a42d-487a-87cc-f5ecce0ddcc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabfe2221-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.725 248049 DEBUG os_vif [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:cc:90,bridge_name='br-int',has_traffic_filtering=True,id=abfe2221-d53c-4fee-9063-5d7c426351e3,network=Network(a68d0b24-a42d-487a-87cc-f5ecce0ddcc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabfe2221-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.726 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.726 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabfe2221-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:54:15 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:15Z|00059|binding|INFO|Releasing lport abfe2221-d53c-4fee-9063-5d7c426351e3 from this chassis (sb_readonly=0)
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.727 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.729 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.731 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.731 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:cc:90 10.100.0.8'], port_security=['fa:16:3e:37:cc:90 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '063acbef-2bd2-4b9b-b641-bc7c62945e62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be33538f-4886-46cf-b41d-06835b51122f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], logical_port=abfe2221-d53c-4fee-9063-5d7c426351e3) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.738 248049 DEBUG nova.compute.manager [req-41dbbd16-8eaa-46c4-81f9-472cae59f367 req-78dc0a9a-9054-48d5-85ed-a09759cfa9bf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-vif-unplugged-abfe2221-d53c-4fee-9063-5d7c426351e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.738 248049 DEBUG oslo_concurrency.lockutils [req-41dbbd16-8eaa-46c4-81f9-472cae59f367 req-78dc0a9a-9054-48d5-85ed-a09759cfa9bf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.738 248049 DEBUG oslo_concurrency.lockutils [req-41dbbd16-8eaa-46c4-81f9-472cae59f367 req-78dc0a9a-9054-48d5-85ed-a09759cfa9bf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.739 248049 DEBUG oslo_concurrency.lockutils [req-41dbbd16-8eaa-46c4-81f9-472cae59f367 req-78dc0a9a-9054-48d5-85ed-a09759cfa9bf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.739 248049 DEBUG nova.compute.manager [req-41dbbd16-8eaa-46c4-81f9-472cae59f367 req-78dc0a9a-9054-48d5-85ed-a09759cfa9bf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] No waiting events found dispatching network-vif-unplugged-abfe2221-d53c-4fee-9063-5d7c426351e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.739 248049 DEBUG nova.compute.manager [req-41dbbd16-8eaa-46c4-81f9-472cae59f367 req-78dc0a9a-9054-48d5-85ed-a09759cfa9bf e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-vif-unplugged-abfe2221-d53c-4fee-9063-5d7c426351e3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 22 04:54:15 np0005591760 kernel: tapa68d0b24-a0: left promiscuous mode
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.747 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.748 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.750 248049 INFO os_vif [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:cc:90,bridge_name='br-int',has_traffic_filtering=True,id=abfe2221-d53c-4fee-9063-5d7c426351e3,network=Network(a68d0b24-a42d-487a-87cc-f5ecce0ddcc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabfe2221-d5')#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.751 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[20aa09b2-695d-42dc-bd10-fa9f65d9e278]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.758 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[b077dbc9-6dcd-4d4c-b5f6-0bcd4e7cc55c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.759 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[5554a60a-65bd-4536-8ba7-e06e9818f2e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.772 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[a023a447-06a4-4d98-9632-96a9cac85458]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 320336, 'reachable_time': 31881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255897, 'error': None, 'target': 'ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:15 np0005591760 systemd[1]: run-netns-ovnmeta\x2da68d0b24\x2da42d\x2d487a\x2d87cc\x2df5ecce0ddcc8.mount: Deactivated successfully.
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.775 164492 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a68d0b24-a42d-487a-87cc-f5ecce0ddcc8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.775 164492 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd7e94b-96e2-402c-83a3-4077f7da83ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.776 164103 INFO neutron.agent.ovn.metadata.agent [-] Port abfe2221-d53c-4fee-9063-5d7c426351e3 in datapath a68d0b24-a42d-487a-87cc-f5ecce0ddcc8 unbound from our chassis#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.777 164103 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a68d0b24-a42d-487a-87cc-f5ecce0ddcc8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.777 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[454b0dd0-eadc-4370-942b-f10754f5f049]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.778 164103 INFO neutron.agent.ovn.metadata.agent [-] Port abfe2221-d53c-4fee-9063-5d7c426351e3 in datapath a68d0b24-a42d-487a-87cc-f5ecce0ddcc8 unbound from our chassis#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.778 164103 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a68d0b24-a42d-487a-87cc-f5ecce0ddcc8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 22 04:54:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:15.779 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[4b744ac3-dd4e-40e5-8559-ae1284f6f389]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.787 248049 DEBUG nova.compute.manager [req-a0908a1d-8cf2-4f8f-bc46-3d6a5a340fbb req-8105fa83-a2dd-4083-89f4-768bc434031b e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-changed-abfe2221-d53c-4fee-9063-5d7c426351e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.787 248049 DEBUG nova.compute.manager [req-a0908a1d-8cf2-4f8f-bc46-3d6a5a340fbb req-8105fa83-a2dd-4083-89f4-768bc434031b e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Refreshing instance network info cache due to event network-changed-abfe2221-d53c-4fee-9063-5d7c426351e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.787 248049 DEBUG oslo_concurrency.lockutils [req-a0908a1d-8cf2-4f8f-bc46-3d6a5a340fbb req-8105fa83-a2dd-4083-89f4-768bc434031b e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:54:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:15 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:54:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:15.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.922 248049 INFO nova.virt.libvirt.driver [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Deleting instance files /var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_del#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.922 248049 INFO nova.virt.libvirt.driver [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Deletion of /var/lib/nova/instances/d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995_del complete#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.989 248049 INFO nova.compute.manager [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Took 0.52 seconds to destroy the instance on the hypervisor.#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.989 248049 DEBUG oslo.service.loopingcall [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.990 248049 DEBUG nova.compute.manager [-] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 22 04:54:15 np0005591760 nova_compute[248045]: 2026-01-22 09:54:15.990 248049 DEBUG nova.network.neutron [-] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 22 04:54:16 np0005591760 nova_compute[248045]: 2026-01-22 09:54:16.232 248049 DEBUG nova.network.neutron [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updating instance_info_cache with network_info: [{"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:54:16 np0005591760 nova_compute[248045]: 2026-01-22 09:54:16.257 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Releasing lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:54:16 np0005591760 nova_compute[248045]: 2026-01-22 09:54:16.258 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 04:54:16 np0005591760 nova_compute[248045]: 2026-01-22 09:54:16.258 248049 DEBUG oslo_concurrency.lockutils [req-a0908a1d-8cf2-4f8f-bc46-3d6a5a340fbb req-8105fa83-a2dd-4083-89f4-768bc434031b e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquired lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:54:16 np0005591760 nova_compute[248045]: 2026-01-22 09:54:16.258 248049 DEBUG nova.network.neutron [req-a0908a1d-8cf2-4f8f-bc46-3d6a5a340fbb req-8105fa83-a2dd-4083-89f4-768bc434031b e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Refreshing network info cache for port abfe2221-d53c-4fee-9063-5d7c426351e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 04:54:16 np0005591760 nova_compute[248045]: 2026-01-22 09:54:16.259 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:54:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:16.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:16 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:16 np0005591760 nova_compute[248045]: 2026-01-22 09:54:16.623 248049 DEBUG nova.network.neutron [-] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:54:16 np0005591760 nova_compute[248045]: 2026-01-22 09:54:16.640 248049 INFO nova.compute.manager [-] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Took 0.65 seconds to deallocate network for instance.#033[00m
Jan 22 04:54:16 np0005591760 nova_compute[248045]: 2026-01-22 09:54:16.675 248049 DEBUG oslo_concurrency.lockutils [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:16 np0005591760 nova_compute[248045]: 2026-01-22 09:54:16.675 248049 DEBUG oslo_concurrency.lockutils [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:54:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2879046996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:54:16 np0005591760 nova_compute[248045]: 2026-01-22 09:54:16.710 248049 DEBUG oslo_concurrency.processutils [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:54:16 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v662: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 19 KiB/s wr, 1 op/s
Jan 22 04:54:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:17 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d4ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:17.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:17.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:17.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:17.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:54:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639590329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.064 248049 DEBUG oslo_concurrency.processutils [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.068 248049 DEBUG nova.compute.provider_tree [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.083 248049 DEBUG nova.scheduler.client.report [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.102 248049 DEBUG oslo_concurrency.lockutils [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.138 248049 INFO nova.scheduler.client.report [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Deleted allocations for instance d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.172 248049 DEBUG nova.network.neutron [req-a0908a1d-8cf2-4f8f-bc46-3d6a5a340fbb req-8105fa83-a2dd-4083-89f4-768bc434031b e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updated VIF entry in instance network info cache for port abfe2221-d53c-4fee-9063-5d7c426351e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.172 248049 DEBUG nova.network.neutron [req-a0908a1d-8cf2-4f8f-bc46-3d6a5a340fbb req-8105fa83-a2dd-4083-89f4-768bc434031b e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Updating instance_info_cache with network_info: [{"id": "abfe2221-d53c-4fee-9063-5d7c426351e3", "address": "fa:16:3e:37:cc:90", "network": {"id": "a68d0b24-a42d-487a-87cc-f5ecce0ddcc8", "bridge": "br-int", "label": "tempest-network-smoke--1447409561", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabfe2221-d5", "ovs_interfaceid": "abfe2221-d53c-4fee-9063-5d7c426351e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.199 248049 DEBUG oslo_concurrency.lockutils [None req-1cf6124a-1118-4aec-878c-fbe5bb1f007b 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.201 248049 DEBUG oslo_concurrency.lockutils [req-a0908a1d-8cf2-4f8f-bc46-3d6a5a340fbb req-8105fa83-a2dd-4083-89f4-768bc434031b e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Releasing lock "refresh_cache-d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:54:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:54:17] "GET /metrics HTTP/1.1" 200 48603 "" "Prometheus/2.51.0"
Jan 22 04:54:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:54:17] "GET /metrics HTTP/1.1" 200 48603 "" "Prometheus/2.51.0"
Jan 22 04:54:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:17 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.810 248049 DEBUG nova.compute.manager [req-a00038c1-113a-41a7-939a-f66d98c7c404 req-5b0f3bce-ee5e-43c9-bf3b-7f5f80c6d77f e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-vif-plugged-abfe2221-d53c-4fee-9063-5d7c426351e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.811 248049 DEBUG oslo_concurrency.lockutils [req-a00038c1-113a-41a7-939a-f66d98c7c404 req-5b0f3bce-ee5e-43c9-bf3b-7f5f80c6d77f e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.811 248049 DEBUG oslo_concurrency.lockutils [req-a00038c1-113a-41a7-939a-f66d98c7c404 req-5b0f3bce-ee5e-43c9-bf3b-7f5f80c6d77f e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.811 248049 DEBUG oslo_concurrency.lockutils [req-a00038c1-113a-41a7-939a-f66d98c7c404 req-5b0f3bce-ee5e-43c9-bf3b-7f5f80c6d77f e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.811 248049 DEBUG nova.compute.manager [req-a00038c1-113a-41a7-939a-f66d98c7c404 req-5b0f3bce-ee5e-43c9-bf3b-7f5f80c6d77f e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] No waiting events found dispatching network-vif-plugged-abfe2221-d53c-4fee-9063-5d7c426351e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.811 248049 WARNING nova.compute.manager [req-a00038c1-113a-41a7-939a-f66d98c7c404 req-5b0f3bce-ee5e-43c9-bf3b-7f5f80c6d77f e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received unexpected event network-vif-plugged-abfe2221-d53c-4fee-9063-5d7c426351e3 for instance with vm_state deleted and task_state None.#033[00m
Jan 22 04:54:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:17.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.849 248049 DEBUG nova.compute.manager [req-5426a1f3-024a-4d36-b3ac-82d5ea924e26 req-9405af8c-e827-4ae4-aeba-7cba31a3c082 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Received event network-vif-deleted-abfe2221-d53c-4fee-9063-5d7c426351e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.849 248049 INFO nova.compute.manager [req-5426a1f3-024a-4d36-b3ac-82d5ea924e26 req-9405af8c-e827-4ae4-aeba-7cba31a3c082 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Neutron deleted interface abfe2221-d53c-4fee-9063-5d7c426351e3; detaching it from the instance and deleting it from the info cache#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.849 248049 DEBUG nova.network.neutron [req-5426a1f3-024a-4d36-b3ac-82d5ea924e26 req-9405af8c-e827-4ae4-aeba-7cba31a3c082 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Jan 22 04:54:17 np0005591760 nova_compute[248045]: 2026-01-22 09:54:17.850 248049 DEBUG nova.compute.manager [req-5426a1f3-024a-4d36-b3ac-82d5ea924e26 req-9405af8c-e827-4ae4-aeba-7cba31a3c082 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Detach interface failed, port_id=abfe2221-d53c-4fee-9063-5d7c426351e3, reason: Instance d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 22 04:54:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:18.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:54:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:18 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:18.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:18 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v663: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 6.3 KiB/s wr, 0 op/s
Jan 22 04:54:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:18.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:18.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:18.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:19 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:54:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:54:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:54:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:54:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:54:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:54:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:19 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55df1e9d4ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:19.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:20 np0005591760 nova_compute[248045]: 2026-01-22 09:54:20.307 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:20.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:20 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:20 np0005591760 nova_compute[248045]: 2026-01-22 09:54:20.727 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:20 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v664: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 7.5 KiB/s wr, 28 op/s
Jan 22 04:54:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:21 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:21 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb084001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:21.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:21 np0005591760 nova_compute[248045]: 2026-01-22 09:54:21.932 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:22 np0005591760 nova_compute[248045]: 2026-01-22 09:54:22.041 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:22.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:22 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:22 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v665: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 22 04:54:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:23 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:54:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:23 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:23.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:24.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:24 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb084001bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:24 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v666: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 22 04:54:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:25 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:25 np0005591760 nova_compute[248045]: 2026-01-22 09:54:25.309 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:25 np0005591760 nova_compute[248045]: 2026-01-22 09:54:25.728 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:25 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:25.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:26.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:26 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:26 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v667: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 22 04:54:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:27 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb084001bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:27.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:27.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:27.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:27.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:54:27] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 04:54:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:54:27] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 04:54:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:27 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:27.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:28.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:54:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:28 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:28.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:28.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:28.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:28.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:28 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v668: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 04:54:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:29 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:29 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb084001bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:29.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:30 np0005591760 nova_compute[248045]: 2026-01-22 09:54:30.310 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:30.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:30 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:30 np0005591760 nova_compute[248045]: 2026-01-22 09:54:30.701 248049 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769075655.7004645, d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 04:54:30 np0005591760 nova_compute[248045]: 2026-01-22 09:54:30.702 248049 INFO nova.compute.manager [-] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] VM Stopped (Lifecycle Event)#033[00m
Jan 22 04:54:30 np0005591760 nova_compute[248045]: 2026-01-22 09:54:30.715 248049 DEBUG nova.compute.manager [None req-f8ad723e-00a4-4e1a-8a59-c6ec5764c268 - - - - - -] [instance: d967bcf1-04ac-4eaf-8e7d-0f0a0d8b1995] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 04:54:30 np0005591760 nova_compute[248045]: 2026-01-22 09:54:30.729 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:30 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v669: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 22 04:54:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:31 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:31 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:31.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:32 np0005591760 podman[255974]: 2026-01-22 09:54:32.364408868 +0000 UTC m=+0.046736922 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 04:54:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:32.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:32 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb084003030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:32 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v670: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:54:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:33 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb084003030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:54:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:33 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:33.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:34.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:34 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:34 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v671: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 22 04:54:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:35 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:35 np0005591760 nova_compute[248045]: 2026-01-22 09:54:35.311 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:35 np0005591760 nova_compute[248045]: 2026-01-22 09:54:35.730 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:35 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c005d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:35.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:54:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:36.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:54:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:36 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:36 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v672: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 7 op/s
Jan 22 04:54:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:37.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:37 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:37.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:37.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:37.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:54:37] "GET /metrics HTTP/1.1" 200 48585 "" "Prometheus/2.51.0"
Jan 22 04:54:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:54:37] "GET /metrics HTTP/1.1" 200 48585 "" "Prometheus/2.51.0"
Jan 22 04:54:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:37 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:37.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:38 np0005591760 podman[255998]: 2026-01-22 09:54:38.06288595 +0000 UTC m=+0.055313745 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 04:54:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:38.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:54:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095438 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:54:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:38 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:38.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:38.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:38.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:38.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:38 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v673: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 7 op/s
Jan 22 04:54:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:39 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb084003030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:39 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:39.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:40 np0005591760 nova_compute[248045]: 2026-01-22 09:54:40.313 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:40.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:40 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:40 np0005591760 nova_compute[248045]: 2026-01-22 09:54:40.730 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:40 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v674: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 46 op/s
Jan 22 04:54:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:41 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:41 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:41.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:42.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:42 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:42 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v675: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 04:54:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:43 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:54:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:43 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:43.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:44.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:44 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:44 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v676: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 04:54:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:45 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780066a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:45 np0005591760 nova_compute[248045]: 2026-01-22 09:54:45.314 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:45 np0005591760 nova_compute[248045]: 2026-01-22 09:54:45.731 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:45 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:45.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:46 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:54:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:46.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:46 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:46 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v677: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Jan 22 04:54:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:47.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:47 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:47.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:47.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:47.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:47.312 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:54:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:47.312 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:54:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:54:47.312 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:54:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:54:47] "GET /metrics HTTP/1.1" 200 48585 "" "Prometheus/2.51.0"
Jan 22 04:54:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:54:47] "GET /metrics HTTP/1.1" 200 48585 "" "Prometheus/2.51.0"
Jan 22 04:54:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:47 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07800b940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:54:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:47.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:54:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:48.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:54:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:48.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:48.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:48.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:48.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:48 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v678: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Jan 22 04:54:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:49 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:54:49
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['.nfs', 'backups', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'images', 'volumes', 'vms']
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:54:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:49 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:54:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:49 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:54:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:54:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:49 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:49.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:50 np0005591760 nova_compute[248045]: 2026-01-22 09:54:50.314 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:50.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:50 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07800a8c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:50 np0005591760 nova_compute[248045]: 2026-01-22 09:54:50.732 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:50 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v679: 337 pgs: 337 active+clean; 100 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 182 op/s
Jan 22 04:54:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:51 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07800a8c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:51 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:51.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:52 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:54:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:52.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:52 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880091b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:52 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v680: 337 pgs: 337 active+clean; 100 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 143 op/s
Jan 22 04:54:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:53 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:54:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:53 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:53.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:54.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:54 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:54 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v681: 337 pgs: 337 active+clean; 100 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 143 op/s
Jan 22 04:54:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:55 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880091b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:55 np0005591760 nova_compute[248045]: 2026-01-22 09:54:55.317 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:55 np0005591760 nova_compute[248045]: 2026-01-22 09:54:55.734 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:54:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:55 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:55.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:56.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:56 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:56 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v682: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 187 op/s
Jan 22 04:54:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:57.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:57.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:57.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:57.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:57 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:54:57] "GET /metrics HTTP/1.1" 200 48606 "" "Prometheus/2.51.0"
Jan 22 04:54:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:54:57] "GET /metrics HTTP/1.1" 200 48606 "" "Prometheus/2.51.0"
Jan 22 04:54:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:57 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb088009f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:54:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:57.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:54:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:54:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:54:58.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:54:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:54:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:58 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095458 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:54:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:58.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:58.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:58.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:54:58.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:54:58 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v683: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 22 04:54:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:59 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:59 np0005591760 ovn_controller[154073]: 2026-01-22T09:54:59Z|00060|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:54:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:54:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:54:59 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:54:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:54:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:54:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:54:59.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:55:00 np0005591760 nova_compute[248045]: 2026-01-22 09:55:00.320 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:00.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:00 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:00 np0005591760 nova_compute[248045]: 2026-01-22 09:55:00.735 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:00 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v684: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 22 04:55:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:01 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:01 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:01.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:02.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:02 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:02 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v685: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 262 KiB/s rd, 854 KiB/s wr, 44 op/s
Jan 22 04:55:03 np0005591760 podman[256076]: 2026-01-22 09:55:03.047582574 +0000 UTC m=+0.038224568 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 04:55:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:03 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07800a8c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:55:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:03 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:03.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.096 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.097 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.107 248049 DEBUG nova.compute.manager [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.154 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.155 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.158 248049 DEBUG nova.virt.hardware [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.159 248049 INFO nova.compute.claims [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.221 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:55:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:04.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:04 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:55:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2540396914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.581 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.585 248049 DEBUG nova.compute.provider_tree [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.595 248049 DEBUG nova.scheduler.client.report [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.607 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.608 248049 DEBUG nova.compute.manager [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.636 248049 DEBUG nova.compute.manager [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.637 248049 DEBUG nova.network.neutron [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.648 248049 INFO nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.657 248049 DEBUG nova.compute.manager [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.710 248049 DEBUG nova.compute.manager [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.711 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.711 248049 INFO nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Creating image(s)#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.730 248049 DEBUG nova.storage.rbd_utils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image 6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.747 248049 DEBUG nova.storage.rbd_utils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image 6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.765 248049 DEBUG nova.storage.rbd_utils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image 6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.767 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.781 248049 DEBUG nova.policy [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4428dd9b0fb64c25b8f33b0050d4ef6f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.824 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.824 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "9db187949728ea707722fd244d769f131efa8688" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.825 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "9db187949728ea707722fd244d769f131efa8688" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.826 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "9db187949728ea707722fd244d769f131efa8688" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.845 248049 DEBUG nova.storage.rbd_utils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image 6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.848 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688 6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:55:04 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v686: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 262 KiB/s rd, 854 KiB/s wr, 44 op/s
Jan 22 04:55:04 np0005591760 nova_compute[248045]: 2026-01-22 09:55:04.971 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9db187949728ea707722fd244d769f131efa8688 6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.011 248049 DEBUG nova.storage.rbd_utils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] resizing rbd image 6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 22 04:55:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:05 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.062 248049 DEBUG nova.objects.instance [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f83d437-b6a8-434e-bb56-a982f6e9fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.074 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.074 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Ensure instance console log exists: /var/lib/nova/instances/6f83d437-b6a8-434e-bb56-a982f6e9fc56/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.074 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.075 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.075 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.179 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:05 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:05.179 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:52:1d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:ec:a7:e9:bb:bd'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:55:05 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:05.181 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.255 248049 DEBUG nova.network.neutron [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Successfully created port: da625a0a-7c45-44cc-bc96-31af0ab5f145 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.321 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.736 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:05 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.870 248049 DEBUG nova.network.neutron [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Successfully updated port: da625a0a-7c45-44cc-bc96-31af0ab5f145 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.881 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "refresh_cache-6f83d437-b6a8-434e-bb56-a982f6e9fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.881 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquired lock "refresh_cache-6f83d437-b6a8-434e-bb56-a982f6e9fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.881 248049 DEBUG nova.network.neutron [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 04:55:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:05.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.957 248049 DEBUG nova.compute.manager [req-781d4361-036a-4ec7-a603-604f6e53dde3 req-10470908-15ce-4810-962d-cc0629798ed8 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Received event network-changed-da625a0a-7c45-44cc-bc96-31af0ab5f145 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.957 248049 DEBUG nova.compute.manager [req-781d4361-036a-4ec7-a603-604f6e53dde3 req-10470908-15ce-4810-962d-cc0629798ed8 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Refreshing instance network info cache due to event network-changed-da625a0a-7c45-44cc-bc96-31af0ab5f145. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 04:55:05 np0005591760 nova_compute[248045]: 2026-01-22 09:55:05.957 248049 DEBUG oslo_concurrency.lockutils [req-781d4361-036a-4ec7-a603-604f6e53dde3 req-10470908-15ce-4810-962d-cc0629798ed8 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "refresh_cache-6f83d437-b6a8-434e-bb56-a982f6e9fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.003 248049 DEBUG nova.network.neutron [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 04:55:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:06.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.449 248049 DEBUG nova.network.neutron [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Updating instance_info_cache with network_info: [{"id": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "address": "fa:16:3e:4a:47:4a", "network": {"id": "daf63f47-2032-4eba-953d-74633ed782c9", "bridge": "br-int", "label": "tempest-network-smoke--643927133", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda625a0a-7c", "ovs_interfaceid": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.461 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Releasing lock "refresh_cache-6f83d437-b6a8-434e-bb56-a982f6e9fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.462 248049 DEBUG nova.compute.manager [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Instance network_info: |[{"id": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "address": "fa:16:3e:4a:47:4a", "network": {"id": "daf63f47-2032-4eba-953d-74633ed782c9", "bridge": "br-int", "label": "tempest-network-smoke--643927133", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda625a0a-7c", "ovs_interfaceid": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.462 248049 DEBUG oslo_concurrency.lockutils [req-781d4361-036a-4ec7-a603-604f6e53dde3 req-10470908-15ce-4810-962d-cc0629798ed8 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquired lock "refresh_cache-6f83d437-b6a8-434e-bb56-a982f6e9fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.462 248049 DEBUG nova.network.neutron [req-781d4361-036a-4ec7-a603-604f6e53dde3 req-10470908-15ce-4810-962d-cc0629798ed8 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Refreshing network info cache for port da625a0a-7c45-44cc-bc96-31af0ab5f145 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.464 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Start _get_guest_xml network_info=[{"id": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "address": "fa:16:3e:4a:47:4a", "network": {"id": "daf63f47-2032-4eba-953d-74633ed782c9", "bridge": "br-int", "label": "tempest-network-smoke--643927133", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda625a0a-7c", "ovs_interfaceid": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T09:51:33Z,direct_url=<?>,disk_format='qcow2',id=bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a894ac5b4f744f208fa506d5e8f67970',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T09:51:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'encryption_format': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'encryption_options': None, 'image_id': 'bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.467 248049 WARNING nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.473 248049 DEBUG nova.virt.libvirt.host [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.474 248049 DEBUG nova.virt.libvirt.host [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.476 248049 DEBUG nova.virt.libvirt.host [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.477 248049 DEBUG nova.virt.libvirt.host [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.477 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.477 248049 DEBUG nova.virt.hardware [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T09:51:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6eff66ba-fb3e-4ca7-b05b-920b01d9affd',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T09:51:33Z,direct_url=<?>,disk_format='qcow2',id=bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a894ac5b4f744f208fa506d5e8f67970',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T09:51:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.478 248049 DEBUG nova.virt.hardware [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.478 248049 DEBUG nova.virt.hardware [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.478 248049 DEBUG nova.virt.hardware [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.478 248049 DEBUG nova.virt.hardware [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.478 248049 DEBUG nova.virt.hardware [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.478 248049 DEBUG nova.virt.hardware [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.479 248049 DEBUG nova.virt.hardware [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.479 248049 DEBUG nova.virt.hardware [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.479 248049 DEBUG nova.virt.hardware [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.479 248049 DEBUG nova.virt.hardware [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.481 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:55:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:06 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:55:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:55:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:55:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:55:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:55:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:55:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 22 04:55:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2711870795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.834 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.853 248049 DEBUG nova.storage.rbd_utils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image 6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:55:06 np0005591760 nova_compute[248045]: 2026-01-22 09:55:06.856 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:55:06 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v687: 337 pgs: 337 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.6 MiB/s wr, 147 op/s
Jan 22 04:55:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:07.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:07.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:07.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:07.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:07 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v688: 337 pgs: 337 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 2.1 MiB/s wr, 122 op/s
Jan 22 04:55:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:55:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:55:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:55:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:55:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 22 04:55:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859712145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.247 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.249 248049 DEBUG nova.virt.libvirt.vif [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T09:55:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-779915831',display_name='tempest-TestNetworkBasicOps-server-779915831',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-779915831',id=5,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG5hZ2SSN9qQ+kkgmxWlbRd4sPfjlE083NoFOfh7oQGcHaMZfssTRROyFb/ADgbinP/yrXxAEyHFmiWSiPhZKwsWabVnEfaQ0pCP9lA/btSHB3hIICOVyzi0KxylceZGyA==',key_name='tempest-TestNetworkBasicOps-1348098295',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-dlzm9rtt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T09:55:04Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=6f83d437-b6a8-434e-bb56-a982f6e9fc56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "address": "fa:16:3e:4a:47:4a", "network": {"id": "daf63f47-2032-4eba-953d-74633ed782c9", "bridge": "br-int", "label": "tempest-network-smoke--643927133", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda625a0a-7c", "ovs_interfaceid": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.249 248049 DEBUG nova.network.os_vif_util [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "address": "fa:16:3e:4a:47:4a", "network": {"id": "daf63f47-2032-4eba-953d-74633ed782c9", "bridge": "br-int", "label": "tempest-network-smoke--643927133", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda625a0a-7c", "ovs_interfaceid": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.250 248049 DEBUG nova.network.os_vif_util [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:47:4a,bridge_name='br-int',has_traffic_filtering=True,id=da625a0a-7c45-44cc-bc96-31af0ab5f145,network=Network(daf63f47-2032-4eba-953d-74633ed782c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda625a0a-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.251 248049 DEBUG nova.objects.instance [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f83d437-b6a8-434e-bb56-a982f6e9fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.257 248049 DEBUG nova.network.neutron [req-781d4361-036a-4ec7-a603-604f6e53dde3 req-10470908-15ce-4810-962d-cc0629798ed8 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Updated VIF entry in instance network info cache for port da625a0a-7c45-44cc-bc96-31af0ab5f145. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.258 248049 DEBUG nova.network.neutron [req-781d4361-036a-4ec7-a603-604f6e53dde3 req-10470908-15ce-4810-962d-cc0629798ed8 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Updating instance_info_cache with network_info: [{"id": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "address": "fa:16:3e:4a:47:4a", "network": {"id": "daf63f47-2032-4eba-953d-74633ed782c9", "bridge": "br-int", "label": "tempest-network-smoke--643927133", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda625a0a-7c", "ovs_interfaceid": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.262 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] End _get_guest_xml xml=<domain type="kvm">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  <uuid>6f83d437-b6a8-434e-bb56-a982f6e9fc56</uuid>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  <name>instance-00000005</name>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  <memory>131072</memory>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  <vcpu>1</vcpu>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  <metadata>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <nova:name>tempest-TestNetworkBasicOps-server-779915831</nova:name>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <nova:creationTime>2026-01-22 09:55:06</nova:creationTime>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <nova:flavor name="m1.nano">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <nova:memory>128</nova:memory>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <nova:disk>1</nova:disk>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <nova:swap>0</nova:swap>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <nova:vcpus>1</nova:vcpus>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      </nova:flavor>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <nova:owner>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <nova:user uuid="4428dd9b0fb64c25b8f33b0050d4ef6f">tempest-TestNetworkBasicOps-349110285-project-member</nova:user>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <nova:project uuid="05af97dae0f4449ba7eb640bcd3f61e6">tempest-TestNetworkBasicOps-349110285</nova:project>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      </nova:owner>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <nova:root type="image" uuid="bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <nova:ports>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <nova:port uuid="da625a0a-7c45-44cc-bc96-31af0ab5f145">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        </nova:port>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      </nova:ports>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    </nova:instance>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  </metadata>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  <sysinfo type="smbios">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <system>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <entry name="manufacturer">RDO</entry>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <entry name="product">OpenStack Compute</entry>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <entry name="serial">6f83d437-b6a8-434e-bb56-a982f6e9fc56</entry>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <entry name="uuid">6f83d437-b6a8-434e-bb56-a982f6e9fc56</entry>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <entry name="family">Virtual Machine</entry>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    </system>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  </sysinfo>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  <os>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <boot dev="hd"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <smbios mode="sysinfo"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  </os>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  <features>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <acpi/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <apic/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <vmcoreinfo/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  </features>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  <clock offset="utc">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <timer name="hpet" present="no"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  </clock>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  <cpu mode="host-model" match="exact">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  </cpu>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  <devices>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <disk type="network" device="disk">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <driver type="raw" cache="none"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <source protocol="rbd" name="vms/6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <host name="192.168.122.100" port="6789"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <host name="192.168.122.102" port="6789"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <host name="192.168.122.101" port="6789"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <auth username="openstack">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <secret type="ceph" uuid="43df7a30-cf5f-5209-adfd-bf44298b19f2"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <target dev="vda" bus="virtio"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <disk type="network" device="cdrom">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <driver type="raw" cache="none"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <source protocol="rbd" name="vms/6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk.config">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <host name="192.168.122.100" port="6789"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <host name="192.168.122.102" port="6789"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <host name="192.168.122.101" port="6789"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      </source>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <auth username="openstack">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:        <secret type="ceph" uuid="43df7a30-cf5f-5209-adfd-bf44298b19f2"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      </auth>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <target dev="sda" bus="sata"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    </disk>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <interface type="ethernet">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <mac address="fa:16:3e:4a:47:4a"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <model type="virtio"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <driver name="vhost" rx_queue_size="512"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <mtu size="1442"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <target dev="tapda625a0a-7c"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    </interface>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <serial type="pty">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <log file="/var/lib/nova/instances/6f83d437-b6a8-434e-bb56-a982f6e9fc56/console.log" append="off"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    </serial>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <video>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <model type="virtio"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    </video>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <input type="tablet" bus="usb"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <rng model="virtio">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <backend model="random">/dev/urandom</backend>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    </rng>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <controller type="usb" index="0"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    <memballoon model="virtio">
Jan 22 04:55:07 np0005591760 nova_compute[248045]:      <stats period="10"/>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:    </memballoon>
Jan 22 04:55:07 np0005591760 nova_compute[248045]:  </devices>
Jan 22 04:55:07 np0005591760 nova_compute[248045]: </domain>
Jan 22 04:55:07 np0005591760 nova_compute[248045]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.264 248049 DEBUG nova.compute.manager [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Preparing to wait for external event network-vif-plugged-da625a0a-7c45-44cc-bc96-31af0ab5f145 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.264 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.264 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.264 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.265 248049 DEBUG nova.virt.libvirt.vif [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T09:55:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-779915831',display_name='tempest-TestNetworkBasicOps-server-779915831',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-779915831',id=5,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG5hZ2SSN9qQ+kkgmxWlbRd4sPfjlE083NoFOfh7oQGcHaMZfssTRROyFb/ADgbinP/yrXxAEyHFmiWSiPhZKwsWabVnEfaQ0pCP9lA/btSHB3hIICOVyzi0KxylceZGyA==',key_name='tempest-TestNetworkBasicOps-1348098295',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-dlzm9rtt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T09:55:04Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=6f83d437-b6a8-434e-bb56-a982f6e9fc56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "address": "fa:16:3e:4a:47:4a", "network": {"id": "daf63f47-2032-4eba-953d-74633ed782c9", "bridge": "br-int", "label": "tempest-network-smoke--643927133", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda625a0a-7c", "ovs_interfaceid": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.265 248049 DEBUG nova.network.os_vif_util [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "address": "fa:16:3e:4a:47:4a", "network": {"id": "daf63f47-2032-4eba-953d-74633ed782c9", "bridge": "br-int", "label": "tempest-network-smoke--643927133", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda625a0a-7c", "ovs_interfaceid": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.266 248049 DEBUG nova.network.os_vif_util [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:47:4a,bridge_name='br-int',has_traffic_filtering=True,id=da625a0a-7c45-44cc-bc96-31af0ab5f145,network=Network(daf63f47-2032-4eba-953d-74633ed782c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda625a0a-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.266 248049 DEBUG os_vif [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:47:4a,bridge_name='br-int',has_traffic_filtering=True,id=da625a0a-7c45-44cc-bc96-31af0ab5f145,network=Network(daf63f47-2032-4eba-953d-74633ed782c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda625a0a-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.266 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.267 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.267 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.268 248049 DEBUG oslo_concurrency.lockutils [req-781d4361-036a-4ec7-a603-604f6e53dde3 req-10470908-15ce-4810-962d-cc0629798ed8 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Releasing lock "refresh_cache-6f83d437-b6a8-434e-bb56-a982f6e9fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.269 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.270 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda625a0a-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.270 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda625a0a-7c, col_values=(('external_ids', {'iface-id': 'da625a0a-7c45-44cc-bc96-31af0ab5f145', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:47:4a', 'vm-uuid': '6f83d437-b6a8-434e-bb56-a982f6e9fc56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.271 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:07 np0005591760 NetworkManager[48920]: <info>  [1769075707.2722] manager: (tapda625a0a-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.272 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.276 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.277 248049 INFO os_vif [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:47:4a,bridge_name='br-int',has_traffic_filtering=True,id=da625a0a-7c45-44cc-bc96-31af0ab5f145,network=Network(daf63f47-2032-4eba-953d-74633ed782c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda625a0a-7c')#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.307 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.307 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.307 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] No VIF found with MAC fa:16:3e:4a:47:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.307 248049 INFO nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Using config drive#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.326 248049 DEBUG nova.storage.rbd_utils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image 6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:55:07 np0005591760 podman[256527]: 2026-01-22 09:55:07.500178708 +0000 UTC m=+0.026505187 container create 7399940569c595cc55cb09e7710b0826bf89767321d0142586e058c0c941ceaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 22 04:55:07 np0005591760 systemd[1]: Started libpod-conmon-7399940569c595cc55cb09e7710b0826bf89767321d0142586e058c0c941ceaa.scope.
Jan 22 04:55:07 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:55:07 np0005591760 podman[256527]: 2026-01-22 09:55:07.55300318 +0000 UTC m=+0.079329678 container init 7399940569c595cc55cb09e7710b0826bf89767321d0142586e058c0c941ceaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 22 04:55:07 np0005591760 podman[256527]: 2026-01-22 09:55:07.557617671 +0000 UTC m=+0.083944160 container start 7399940569c595cc55cb09e7710b0826bf89767321d0142586e058c0c941ceaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.558 248049 INFO nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Creating config drive at /var/lib/nova/instances/6f83d437-b6a8-434e-bb56-a982f6e9fc56/disk.config#033[00m
Jan 22 04:55:07 np0005591760 podman[256527]: 2026-01-22 09:55:07.560583324 +0000 UTC m=+0.086909823 container attach 7399940569c595cc55cb09e7710b0826bf89767321d0142586e058c0c941ceaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 04:55:07 np0005591760 happy_boyd[256540]: 167 167
Jan 22 04:55:07 np0005591760 systemd[1]: libpod-7399940569c595cc55cb09e7710b0826bf89767321d0142586e058c0c941ceaa.scope: Deactivated successfully.
Jan 22 04:55:07 np0005591760 podman[256527]: 2026-01-22 09:55:07.561877164 +0000 UTC m=+0.088203644 container died 7399940569c595cc55cb09e7710b0826bf89767321d0142586e058c0c941ceaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.563 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f83d437-b6a8-434e-bb56-a982f6e9fc56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_4gnn2tm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:55:07 np0005591760 systemd[1]: var-lib-containers-storage-overlay-06b46715ec67def317db60b883eed836d0f9a05a0a1a27953507b5a7b587a719-merged.mount: Deactivated successfully.
Jan 22 04:55:07 np0005591760 podman[256527]: 2026-01-22 09:55:07.585835557 +0000 UTC m=+0.112162036 container remove 7399940569c595cc55cb09e7710b0826bf89767321d0142586e058c0c941ceaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:55:07 np0005591760 podman[256527]: 2026-01-22 09:55:07.489685139 +0000 UTC m=+0.016011638 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:55:07 np0005591760 systemd[1]: libpod-conmon-7399940569c595cc55cb09e7710b0826bf89767321d0142586e058c0c941ceaa.scope: Deactivated successfully.
Jan 22 04:55:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:55:07] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:55:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:55:07] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.680 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f83d437-b6a8-434e-bb56-a982f6e9fc56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_4gnn2tm" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.700 248049 DEBUG nova.storage.rbd_utils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] rbd image 6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.703 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f83d437-b6a8-434e-bb56-a982f6e9fc56/disk.config 6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:55:07 np0005591760 podman[256565]: 2026-01-22 09:55:07.705704126 +0000 UTC m=+0.029454680 container create 075bf382c1def843a4bbcd4f8df8a234cd8a9445d82353a1934101c88055fa71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_buck, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:55:07 np0005591760 systemd[1]: Started libpod-conmon-075bf382c1def843a4bbcd4f8df8a234cd8a9445d82353a1934101c88055fa71.scope.
Jan 22 04:55:07 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:55:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0547cdeee21857bd1aa52bf374c0b091dfdda9f4ee4e52cbc9071b61f20eb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0547cdeee21857bd1aa52bf374c0b091dfdda9f4ee4e52cbc9071b61f20eb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0547cdeee21857bd1aa52bf374c0b091dfdda9f4ee4e52cbc9071b61f20eb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0547cdeee21857bd1aa52bf374c0b091dfdda9f4ee4e52cbc9071b61f20eb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:07 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0547cdeee21857bd1aa52bf374c0b091dfdda9f4ee4e52cbc9071b61f20eb3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:07 np0005591760 podman[256565]: 2026-01-22 09:55:07.774958862 +0000 UTC m=+0.098709436 container init 075bf382c1def843a4bbcd4f8df8a234cd8a9445d82353a1934101c88055fa71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:55:07 np0005591760 podman[256565]: 2026-01-22 09:55:07.780289995 +0000 UTC m=+0.104040550 container start 075bf382c1def843a4bbcd4f8df8a234cd8a9445d82353a1934101c88055fa71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_buck, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:55:07 np0005591760 podman[256565]: 2026-01-22 09:55:07.781492534 +0000 UTC m=+0.105243088 container attach 075bf382c1def843a4bbcd4f8df8a234cd8a9445d82353a1934101c88055fa71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_buck, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:55:07 np0005591760 podman[256565]: 2026-01-22 09:55:07.69391944 +0000 UTC m=+0.017670015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:55:07 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:55:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:55:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:55:07 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.806 248049 DEBUG oslo_concurrency.processutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f83d437-b6a8-434e-bb56-a982f6e9fc56/disk.config 6f83d437-b6a8-434e-bb56-a982f6e9fc56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.807 248049 INFO nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Deleting local config drive /var/lib/nova/instances/6f83d437-b6a8-434e-bb56-a982f6e9fc56/disk.config because it was imported into RBD.#033[00m
Jan 22 04:55:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:07 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:07 np0005591760 kernel: tapda625a0a-7c: entered promiscuous mode
Jan 22 04:55:07 np0005591760 NetworkManager[48920]: <info>  [1769075707.8471] manager: (tapda625a0a-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.848 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:07 np0005591760 ovn_controller[154073]: 2026-01-22T09:55:07Z|00061|binding|INFO|Claiming lport da625a0a-7c45-44cc-bc96-31af0ab5f145 for this chassis.
Jan 22 04:55:07 np0005591760 ovn_controller[154073]: 2026-01-22T09:55:07Z|00062|binding|INFO|da625a0a-7c45-44cc-bc96-31af0ab5f145: Claiming fa:16:3e:4a:47:4a 10.100.0.4
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.859 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:47:4a 10.100.0.4'], port_security=['fa:16:3e:4a:47:4a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6f83d437-b6a8-434e-bb56-a982f6e9fc56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf63f47-2032-4eba-953d-74633ed782c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc4cb18b-de28-4a4b-ac8c-f9a794f102d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86d90621-61cf-4b66-b93b-cff275d8d278, chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], logical_port=da625a0a-7c45-44cc-bc96-31af0ab5f145) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.860 164103 INFO neutron.agent.ovn.metadata.agent [-] Port da625a0a-7c45-44cc-bc96-31af0ab5f145 in datapath daf63f47-2032-4eba-953d-74633ed782c9 bound to our chassis#033[00m
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.861 164103 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network daf63f47-2032-4eba-953d-74633ed782c9#033[00m
Jan 22 04:55:07 np0005591760 systemd-udevd[256632]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.872 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[51312cef-c725-48ec-8df7-7972e9fd772e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.877 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdaf63f47-21 in ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 22 04:55:07 np0005591760 systemd-machined[216371]: New machine qemu-3-instance-00000005.
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.880 253045 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdaf63f47-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.880 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3d6d67-efd1-4a10-ac27-059aab83328e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.881 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8470fa-2ab4-4bbc-8dfd-1d9b0644e5cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:07 np0005591760 NetworkManager[48920]: <info>  [1769075707.8833] device (tapda625a0a-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 04:55:07 np0005591760 NetworkManager[48920]: <info>  [1769075707.8837] device (tapda625a0a-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 04:55:07 np0005591760 systemd[1]: Started Virtual Machine qemu-3-instance-00000005.
Jan 22 04:55:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:55:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:07.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.894 164492 DEBUG oslo.privsep.daemon [-] privsep: reply[9437613d-5d33-45de-ab8b-87ca0c0dac65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.919 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[c622924b-de19-4fbf-abeb-e0931d1f11e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.943 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[03598704-d238-453a-9543-b6a60ba1b21e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.949 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[d7221322-0eff-4a80-86e4-48d222b1e162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:07 np0005591760 NetworkManager[48920]: <info>  [1769075707.9501] manager: (tapdaf63f47-20): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.956 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:07 np0005591760 ovn_controller[154073]: 2026-01-22T09:55:07Z|00063|binding|INFO|Setting lport da625a0a-7c45-44cc-bc96-31af0ab5f145 ovn-installed in OVS
Jan 22 04:55:07 np0005591760 ovn_controller[154073]: 2026-01-22T09:55:07Z|00064|binding|INFO|Setting lport da625a0a-7c45-44cc-bc96-31af0ab5f145 up in Southbound
Jan 22 04:55:07 np0005591760 nova_compute[248045]: 2026-01-22 09:55:07.965 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.988 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[8cbe5e8a-3c2f-42a1-a024-293e312390cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:07 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:07.990 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[0dbcb417-04ca-4cbe-a10c-2fdf7b5fc9b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:08 np0005591760 NetworkManager[48920]: <info>  [1769075708.0088] device (tapdaf63f47-20): carrier: link connected
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.013 253060 DEBUG oslo.privsep.daemon [-] privsep: reply[6bece512-c373-40e4-8027-40781be88abe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.026 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[2372418c-4141-4d7e-a384-84cdd44b37ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaf63f47-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:44:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 328738, 'reachable_time': 41249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256664, 'error': None, 'target': 'ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.040 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[d21398df-9d84-45db-837b-26d817044d7b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:44ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 328738, 'tstamp': 328738}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256665, 'error': None, 'target': 'ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.054 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[9e27c9fc-1f0a-4e30-9f6e-c994fb0a1da8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaf63f47-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:44:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 328738, 'reachable_time': 41249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256668, 'error': None, 'target': 'ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:08 np0005591760 dazzling_buck[256597]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:55:08 np0005591760 dazzling_buck[256597]: --> All data devices are unavailable
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.078 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[54595a15-d113-476c-8e2e-0f228dfec481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:08 np0005591760 systemd[1]: libpod-075bf382c1def843a4bbcd4f8df8a234cd8a9445d82353a1934101c88055fa71.scope: Deactivated successfully.
Jan 22 04:55:08 np0005591760 podman[256565]: 2026-01-22 09:55:08.094258004 +0000 UTC m=+0.418008557 container died 075bf382c1def843a4bbcd4f8df8a234cd8a9445d82353a1934101c88055fa71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 04:55:08 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4b0547cdeee21857bd1aa52bf374c0b091dfdda9f4ee4e52cbc9071b61f20eb3-merged.mount: Deactivated successfully.
Jan 22 04:55:08 np0005591760 podman[256565]: 2026-01-22 09:55:08.12647925 +0000 UTC m=+0.450229803 container remove 075bf382c1def843a4bbcd4f8df8a234cd8a9445d82353a1934101c88055fa71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_buck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.136 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[d67bdc16-82f3-42fd-9e99-641003787482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.137 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaf63f47-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.138 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.138 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaf63f47-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:55:08 np0005591760 systemd[1]: libpod-conmon-075bf382c1def843a4bbcd4f8df8a234cd8a9445d82353a1934101c88055fa71.scope: Deactivated successfully.
Jan 22 04:55:08 np0005591760 kernel: tapdaf63f47-20: entered promiscuous mode
Jan 22 04:55:08 np0005591760 NetworkManager[48920]: <info>  [1769075708.1422] manager: (tapdaf63f47-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.145 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdaf63f47-20, col_values=(('external_ids', {'iface-id': 'ba37cdb0-8f86-4e41-9351-622b0e48545a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:55:08 np0005591760 ovn_controller[154073]: 2026-01-22T09:55:08Z|00065|binding|INFO|Releasing lport ba37cdb0-8f86-4e41-9351-622b0e48545a from this chassis (sb_readonly=0)
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.145 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.150 164103 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/daf63f47-2032-4eba-953d-74633ed782c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/daf63f47-2032-4eba-953d-74633ed782c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.156 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb331de-8d0c-425b-8c5e-ea7774544462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.157 164103 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: global
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    log         /dev/log local0 debug
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    log-tag     haproxy-metadata-proxy-daf63f47-2032-4eba-953d-74633ed782c9
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    user        root
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    group       root
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    maxconn     1024
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    pidfile     /var/lib/neutron/external/pids/daf63f47-2032-4eba-953d-74633ed782c9.pid.haproxy
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    daemon
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: defaults
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    log global
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    mode http
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    option httplog
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    option dontlognull
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    option http-server-close
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    option forwardfor
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    retries                 3
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    timeout http-request    30s
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    timeout connect         30s
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    timeout client          32s
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    timeout server          32s
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    timeout http-keep-alive 30s
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: listen listener
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    bind 169.254.169.254:80
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    server metadata /var/lib/neutron/metadata_proxy
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]:    http-request add-header X-OVN-Network-ID daf63f47-2032-4eba-953d-74633ed782c9
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 22 04:55:08 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:08.159 164103 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9', 'env', 'PROCESS_TAG=haproxy-daf63f47-2032-4eba-953d-74633ed782c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/daf63f47-2032-4eba-953d-74633ed782c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.174 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:08 np0005591760 podman[256675]: 2026-01-22 09:55:08.207968719 +0000 UTC m=+0.093268505 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.302 248049 DEBUG nova.compute.manager [req-3d9ec442-6eb1-4c1f-a12a-f691bde75526 req-30769858-e299-4322-8fed-82c3b993bfe4 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Received event network-vif-plugged-da625a0a-7c45-44cc-bc96-31af0ab5f145 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.303 248049 DEBUG oslo_concurrency.lockutils [req-3d9ec442-6eb1-4c1f-a12a-f691bde75526 req-30769858-e299-4322-8fed-82c3b993bfe4 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.303 248049 DEBUG oslo_concurrency.lockutils [req-3d9ec442-6eb1-4c1f-a12a-f691bde75526 req-30769858-e299-4322-8fed-82c3b993bfe4 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.303 248049 DEBUG oslo_concurrency.lockutils [req-3d9ec442-6eb1-4c1f-a12a-f691bde75526 req-30769858-e299-4322-8fed-82c3b993bfe4 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.304 248049 DEBUG nova.compute.manager [req-3d9ec442-6eb1-4c1f-a12a-f691bde75526 req-30769858-e299-4322-8fed-82c3b993bfe4 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Processing event network-vif-plugged-da625a0a-7c45-44cc-bc96-31af0ab5f145 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.406 248049 DEBUG nova.compute.manager [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.408 248049 DEBUG nova.virt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Emitting event <LifecycleEvent: 1769075708.4071553, 6f83d437-b6a8-434e-bb56-a982f6e9fc56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.408 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] VM Started (Lifecycle Event)#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.413 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.415 248049 INFO nova.virt.libvirt.driver [-] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Instance spawned successfully.#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.415 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 22 04:55:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:55:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:08.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.426 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.430 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.432 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.433 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.433 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.433 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.434 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.434 248049 DEBUG nova.virt.libvirt.driver [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.450 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.450 248049 DEBUG nova.virt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Emitting event <LifecycleEvent: 1769075708.4074605, 6f83d437-b6a8-434e-bb56-a982f6e9fc56 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.450 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] VM Paused (Lifecycle Event)#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.473 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.476 248049 DEBUG nova.virt.driver [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] Emitting event <LifecycleEvent: 1769075708.4132433, 6f83d437-b6a8-434e-bb56-a982f6e9fc56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.477 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] VM Resumed (Lifecycle Event)#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.481 248049 INFO nova.compute.manager [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Took 3.77 seconds to spawn the instance on the hypervisor.#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.481 248049 DEBUG nova.compute.manager [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.487 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.490 248049 DEBUG nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 04:55:08 np0005591760 podman[256833]: 2026-01-22 09:55:08.504974467 +0000 UTC m=+0.042027130 container create 858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.514 248049 INFO nova.compute.manager [None req-2d54bb4f-1c3b-4943-abea-08b9c7ba11c2 - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.528 248049 INFO nova.compute.manager [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Took 4.39 seconds to build instance.#033[00m
Jan 22 04:55:08 np0005591760 nova_compute[248045]: 2026-01-22 09:55:08.541 248049 DEBUG oslo_concurrency.lockutils [None req-f89ecbc0-760e-45b0-b002-4fdc2efce31a 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:08 np0005591760 systemd[1]: Started libpod-conmon-858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75.scope.
Jan 22 04:55:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:08 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:08 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:55:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08838bc1cad1fcb0b5842ca5d3a098dfd219acc856fd76c4a7d228b9f7f9d167/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:08 np0005591760 podman[256833]: 2026-01-22 09:55:08.571902065 +0000 UTC m=+0.108954737 container init 858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 22 04:55:08 np0005591760 podman[256833]: 2026-01-22 09:55:08.576087959 +0000 UTC m=+0.113140621 container start 858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 22 04:55:08 np0005591760 podman[256833]: 2026-01-22 09:55:08.49067508 +0000 UTC m=+0.027727753 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 22 04:55:08 np0005591760 neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9[256864]: [NOTICE]   (256869) : New worker (256873) forked
Jan 22 04:55:08 np0005591760 neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9[256864]: [NOTICE]   (256869) : Loading success.
Jan 22 04:55:08 np0005591760 podman[256871]: 2026-01-22 09:55:08.626277019 +0000 UTC m=+0.027029234 container create 7f1f1d7c17976dfc34e5710e870b1efec90fac36075f6c7d9709d0256b50342a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 22 04:55:08 np0005591760 systemd[1]: Started libpod-conmon-7f1f1d7c17976dfc34e5710e870b1efec90fac36075f6c7d9709d0256b50342a.scope.
Jan 22 04:55:08 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:55:08 np0005591760 podman[256871]: 2026-01-22 09:55:08.678627187 +0000 UTC m=+0.079379392 container init 7f1f1d7c17976dfc34e5710e870b1efec90fac36075f6c7d9709d0256b50342a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:55:08 np0005591760 podman[256871]: 2026-01-22 09:55:08.683516437 +0000 UTC m=+0.084268641 container start 7f1f1d7c17976dfc34e5710e870b1efec90fac36075f6c7d9709d0256b50342a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 22 04:55:08 np0005591760 podman[256871]: 2026-01-22 09:55:08.684810056 +0000 UTC m=+0.085562281 container attach 7f1f1d7c17976dfc34e5710e870b1efec90fac36075f6c7d9709d0256b50342a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:55:08 np0005591760 reverent_blackburn[256890]: 167 167
Jan 22 04:55:08 np0005591760 systemd[1]: libpod-7f1f1d7c17976dfc34e5710e870b1efec90fac36075f6c7d9709d0256b50342a.scope: Deactivated successfully.
Jan 22 04:55:08 np0005591760 podman[256871]: 2026-01-22 09:55:08.688488894 +0000 UTC m=+0.089241099 container died 7f1f1d7c17976dfc34e5710e870b1efec90fac36075f6c7d9709d0256b50342a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 04:55:08 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a8f4bbda747dd8a983c79c8da85cb28f68230283eb5ef5b9362c0e03ca2d45ab-merged.mount: Deactivated successfully.
Jan 22 04:55:08 np0005591760 podman[256871]: 2026-01-22 09:55:08.711624205 +0000 UTC m=+0.112376410 container remove 7f1f1d7c17976dfc34e5710e870b1efec90fac36075f6c7d9709d0256b50342a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:55:08 np0005591760 podman[256871]: 2026-01-22 09:55:08.615054366 +0000 UTC m=+0.015806591 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:55:08 np0005591760 systemd[1]: libpod-conmon-7f1f1d7c17976dfc34e5710e870b1efec90fac36075f6c7d9709d0256b50342a.scope: Deactivated successfully.
Jan 22 04:55:08 np0005591760 podman[256913]: 2026-01-22 09:55:08.845036765 +0000 UTC m=+0.028595659 container create 8bf36abb92ba02b88299126e1317c5f2b42ddcf9f01c4ebc59f35211a5fd5d3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_proskuriakova, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:55:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:08.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:08 np0005591760 systemd[1]: Started libpod-conmon-8bf36abb92ba02b88299126e1317c5f2b42ddcf9f01c4ebc59f35211a5fd5d3b.scope.
Jan 22 04:55:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:08.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:08.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:08.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:08 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:55:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e183ca61ee3ba27a3fe265fff1193ef3c9fd7a891fa28c40719cf3e497eb4baf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e183ca61ee3ba27a3fe265fff1193ef3c9fd7a891fa28c40719cf3e497eb4baf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e183ca61ee3ba27a3fe265fff1193ef3c9fd7a891fa28c40719cf3e497eb4baf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e183ca61ee3ba27a3fe265fff1193ef3c9fd7a891fa28c40719cf3e497eb4baf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:08 np0005591760 podman[256913]: 2026-01-22 09:55:08.902413722 +0000 UTC m=+0.085972626 container init 8bf36abb92ba02b88299126e1317c5f2b42ddcf9f01c4ebc59f35211a5fd5d3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 22 04:55:08 np0005591760 podman[256913]: 2026-01-22 09:55:08.907402841 +0000 UTC m=+0.090961735 container start 8bf36abb92ba02b88299126e1317c5f2b42ddcf9f01c4ebc59f35211a5fd5d3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:55:08 np0005591760 podman[256913]: 2026-01-22 09:55:08.908591192 +0000 UTC m=+0.092150086 container attach 8bf36abb92ba02b88299126e1317c5f2b42ddcf9f01c4ebc59f35211a5fd5d3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_proskuriakova, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 04:55:08 np0005591760 podman[256913]: 2026-01-22 09:55:08.833279773 +0000 UTC m=+0.016838667 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:55:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:09 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v689: 337 pgs: 337 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 2.1 MiB/s wr, 122 op/s
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]: {
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:    "0": [
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:        {
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            "devices": [
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "/dev/loop3"
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            ],
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            "lv_name": "ceph_lv0",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            "lv_size": "21470642176",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            "name": "ceph_lv0",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            "tags": {
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.cluster_name": "ceph",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.crush_device_class": "",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.encrypted": "0",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.osd_id": "0",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.type": "block",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.vdo": "0",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:                "ceph.with_tpm": "0"
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            },
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            "type": "block",
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:            "vg_name": "ceph_vg0"
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:        }
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]:    ]
Jan 22 04:55:09 np0005591760 quizzical_proskuriakova[256926]: }
Jan 22 04:55:09 np0005591760 systemd[1]: libpod-8bf36abb92ba02b88299126e1317c5f2b42ddcf9f01c4ebc59f35211a5fd5d3b.scope: Deactivated successfully.
Jan 22 04:55:09 np0005591760 podman[256913]: 2026-01-22 09:55:09.150733504 +0000 UTC m=+0.334292398 container died 8bf36abb92ba02b88299126e1317c5f2b42ddcf9f01c4ebc59f35211a5fd5d3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_proskuriakova, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:55:09 np0005591760 podman[256913]: 2026-01-22 09:55:09.172738574 +0000 UTC m=+0.356297468 container remove 8bf36abb92ba02b88299126e1317c5f2b42ddcf9f01c4ebc59f35211a5fd5d3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_proskuriakova, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 04:55:09 np0005591760 systemd[1]: libpod-conmon-8bf36abb92ba02b88299126e1317c5f2b42ddcf9f01c4ebc59f35211a5fd5d3b.scope: Deactivated successfully.
Jan 22 04:55:09 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e183ca61ee3ba27a3fe265fff1193ef3c9fd7a891fa28c40719cf3e497eb4baf-merged.mount: Deactivated successfully.
Jan 22 04:55:09 np0005591760 podman[257026]: 2026-01-22 09:55:09.645812356 +0000 UTC m=+0.036613370 container create fb60db23e8512ff6b86f06ee9318453af73abe6be81dc2cd1325a8c62a06703f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 04:55:09 np0005591760 systemd[1]: Started libpod-conmon-fb60db23e8512ff6b86f06ee9318453af73abe6be81dc2cd1325a8c62a06703f.scope.
Jan 22 04:55:09 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:55:09 np0005591760 podman[257026]: 2026-01-22 09:55:09.706994199 +0000 UTC m=+0.097795202 container init fb60db23e8512ff6b86f06ee9318453af73abe6be81dc2cd1325a8c62a06703f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_goldstine, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:55:09 np0005591760 podman[257026]: 2026-01-22 09:55:09.712495202 +0000 UTC m=+0.103296206 container start fb60db23e8512ff6b86f06ee9318453af73abe6be81dc2cd1325a8c62a06703f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_goldstine, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:55:09 np0005591760 podman[257026]: 2026-01-22 09:55:09.713618271 +0000 UTC m=+0.104419265 container attach fb60db23e8512ff6b86f06ee9318453af73abe6be81dc2cd1325a8c62a06703f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 04:55:09 np0005591760 priceless_goldstine[257039]: 167 167
Jan 22 04:55:09 np0005591760 systemd[1]: libpod-fb60db23e8512ff6b86f06ee9318453af73abe6be81dc2cd1325a8c62a06703f.scope: Deactivated successfully.
Jan 22 04:55:09 np0005591760 podman[257026]: 2026-01-22 09:55:09.716277455 +0000 UTC m=+0.107078469 container died fb60db23e8512ff6b86f06ee9318453af73abe6be81dc2cd1325a8c62a06703f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_goldstine, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:55:09 np0005591760 podman[257026]: 2026-01-22 09:55:09.62989608 +0000 UTC m=+0.020697104 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:55:09 np0005591760 systemd[1]: var-lib-containers-storage-overlay-26e66a1a24e8df386c8cbe7d1ce681ef4b0a92401d7f1f956fafde634abc48c4-merged.mount: Deactivated successfully.
Jan 22 04:55:09 np0005591760 podman[257026]: 2026-01-22 09:55:09.738801072 +0000 UTC m=+0.129602076 container remove fb60db23e8512ff6b86f06ee9318453af73abe6be81dc2cd1325a8c62a06703f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:55:09 np0005591760 systemd[1]: libpod-conmon-fb60db23e8512ff6b86f06ee9318453af73abe6be81dc2cd1325a8c62a06703f.scope: Deactivated successfully.
Jan 22 04:55:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:09 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:55:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:09.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:55:09 np0005591760 podman[257060]: 2026-01-22 09:55:09.893746762 +0000 UTC m=+0.033030293 container create ebb5ec56fdd262072477bbbe88f8752d6a4def4b52c6cd4aae0a5829434c4386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mayer, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 22 04:55:09 np0005591760 systemd[1]: Started libpod-conmon-ebb5ec56fdd262072477bbbe88f8752d6a4def4b52c6cd4aae0a5829434c4386.scope.
Jan 22 04:55:09 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:55:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c66036f051e045ecf599fa08e7a37909e5c74620be9642c12e2690c17bb3b21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c66036f051e045ecf599fa08e7a37909e5c74620be9642c12e2690c17bb3b21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c66036f051e045ecf599fa08e7a37909e5c74620be9642c12e2690c17bb3b21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c66036f051e045ecf599fa08e7a37909e5c74620be9642c12e2690c17bb3b21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:55:09 np0005591760 podman[257060]: 2026-01-22 09:55:09.963756343 +0000 UTC m=+0.103039863 container init ebb5ec56fdd262072477bbbe88f8752d6a4def4b52c6cd4aae0a5829434c4386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mayer, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Jan 22 04:55:09 np0005591760 podman[257060]: 2026-01-22 09:55:09.969341035 +0000 UTC m=+0.108624555 container start ebb5ec56fdd262072477bbbe88f8752d6a4def4b52c6cd4aae0a5829434c4386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mayer, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:55:09 np0005591760 podman[257060]: 2026-01-22 09:55:09.970642678 +0000 UTC m=+0.109926199 container attach ebb5ec56fdd262072477bbbe88f8752d6a4def4b52c6cd4aae0a5829434c4386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mayer, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 04:55:09 np0005591760 podman[257060]: 2026-01-22 09:55:09.8813644 +0000 UTC m=+0.020647931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:55:10 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:10.183 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.323 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.354 248049 DEBUG nova.compute.manager [req-62eb04c4-2483-45c2-a8b5-43edddbb051f req-8655b4b5-f7e6-4a47-910e-864ec9dac7fa e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Received event network-vif-plugged-da625a0a-7c45-44cc-bc96-31af0ab5f145 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.355 248049 DEBUG oslo_concurrency.lockutils [req-62eb04c4-2483-45c2-a8b5-43edddbb051f req-8655b4b5-f7e6-4a47-910e-864ec9dac7fa e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.355 248049 DEBUG oslo_concurrency.lockutils [req-62eb04c4-2483-45c2-a8b5-43edddbb051f req-8655b4b5-f7e6-4a47-910e-864ec9dac7fa e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.355 248049 DEBUG oslo_concurrency.lockutils [req-62eb04c4-2483-45c2-a8b5-43edddbb051f req-8655b4b5-f7e6-4a47-910e-864ec9dac7fa e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.355 248049 DEBUG nova.compute.manager [req-62eb04c4-2483-45c2-a8b5-43edddbb051f req-8655b4b5-f7e6-4a47-910e-864ec9dac7fa e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] No waiting events found dispatching network-vif-plugged-da625a0a-7c45-44cc-bc96-31af0ab5f145 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.356 248049 WARNING nova.compute.manager [req-62eb04c4-2483-45c2-a8b5-43edddbb051f req-8655b4b5-f7e6-4a47-910e-864ec9dac7fa e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Received unexpected event network-vif-plugged-da625a0a-7c45-44cc-bc96-31af0ab5f145 for instance with vm_state active and task_state None.
Jan 22 04:55:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:55:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:10.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:55:10 np0005591760 lvm[257149]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:55:10 np0005591760 lvm[257149]: VG ceph_vg0 finished
Jan 22 04:55:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:10 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07800a8c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:10 np0005591760 exciting_mayer[257074]: {}
Jan 22 04:55:10 np0005591760 systemd[1]: libpod-ebb5ec56fdd262072477bbbe88f8752d6a4def4b52c6cd4aae0a5829434c4386.scope: Deactivated successfully.
Jan 22 04:55:10 np0005591760 podman[257060]: 2026-01-22 09:55:10.594349979 +0000 UTC m=+0.733633500 container died ebb5ec56fdd262072477bbbe88f8752d6a4def4b52c6cd4aae0a5829434c4386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mayer, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:55:10 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1c66036f051e045ecf599fa08e7a37909e5c74620be9642c12e2690c17bb3b21-merged.mount: Deactivated successfully.
Jan 22 04:55:10 np0005591760 podman[257060]: 2026-01-22 09:55:10.631571134 +0000 UTC m=+0.770854644 container remove ebb5ec56fdd262072477bbbe88f8752d6a4def4b52c6cd4aae0a5829434c4386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 22 04:55:10 np0005591760 systemd[1]: libpod-conmon-ebb5ec56fdd262072477bbbe88f8752d6a4def4b52c6cd4aae0a5829434c4386.scope: Deactivated successfully.
Jan 22 04:55:10 np0005591760 NetworkManager[48920]: <info>  [1769075710.6906] manager: (patch-br-int-to-provnet-397c94eb-88af-4737-bae3-7adb982d097b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 22 04:55:10 np0005591760 NetworkManager[48920]: <info>  [1769075710.6914] manager: (patch-provnet-397c94eb-88af-4737-bae3-7adb982d097b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Jan 22 04:55:10 np0005591760 ovn_controller[154073]: 2026-01-22T09:55:10Z|00066|binding|INFO|Releasing lport ba37cdb0-8f86-4e41-9351-622b0e48545a from this chassis (sb_readonly=0)
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.691 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:55:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:55:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:55:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:55:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:55:10 np0005591760 ovn_controller[154073]: 2026-01-22T09:55:10Z|00067|binding|INFO|Releasing lport ba37cdb0-8f86-4e41-9351-622b0e48545a from this chassis (sb_readonly=0)
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.745 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.747 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:55:10 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:55:10 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.876 248049 DEBUG nova.compute.manager [req-79d4fbda-2158-47c6-8cee-d0f071d2cfce req-6609447d-dbf8-4e41-93e1-f01b1c3f92f0 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Received event network-changed-da625a0a-7c45-44cc-bc96-31af0ab5f145 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.876 248049 DEBUG nova.compute.manager [req-79d4fbda-2158-47c6-8cee-d0f071d2cfce req-6609447d-dbf8-4e41-93e1-f01b1c3f92f0 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Refreshing instance network info cache due to event network-changed-da625a0a-7c45-44cc-bc96-31af0ab5f145. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.876 248049 DEBUG oslo_concurrency.lockutils [req-79d4fbda-2158-47c6-8cee-d0f071d2cfce req-6609447d-dbf8-4e41-93e1-f01b1c3f92f0 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "refresh_cache-6f83d437-b6a8-434e-bb56-a982f6e9fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.877 248049 DEBUG oslo_concurrency.lockutils [req-79d4fbda-2158-47c6-8cee-d0f071d2cfce req-6609447d-dbf8-4e41-93e1-f01b1c3f92f0 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquired lock "refresh_cache-6f83d437-b6a8-434e-bb56-a982f6e9fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 04:55:10 np0005591760 nova_compute[248045]: 2026-01-22 09:55:10.877 248049 DEBUG nova.network.neutron [req-79d4fbda-2158-47c6-8cee-d0f071d2cfce req-6609447d-dbf8-4e41-93e1-f01b1c3f92f0 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Refreshing network info cache for port da625a0a-7c45-44cc-bc96-31af0ab5f145 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 22 04:55:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:11 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07800a8c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v690: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 259 op/s
Jan 22 04:55:11 np0005591760 nova_compute[248045]: 2026-01-22 09:55:11.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:55:11 np0005591760 nova_compute[248045]: 2026-01-22 09:55:11.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:55:11 np0005591760 nova_compute[248045]: 2026-01-22 09:55:11.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:55:11 np0005591760 nova_compute[248045]: 2026-01-22 09:55:11.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 04:55:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:11 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:11.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.272 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.316 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.317 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.317 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.317 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.317 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 04:55:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:12.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:12 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:55:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563209745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.701 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.753 248049 DEBUG nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.753 248049 DEBUG nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.897 248049 DEBUG nova.network.neutron [req-79d4fbda-2158-47c6-8cee-d0f071d2cfce req-6609447d-dbf8-4e41-93e1-f01b1c3f92f0 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Updated VIF entry in instance network info cache for port da625a0a-7c45-44cc-bc96-31af0ab5f145. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.898 248049 DEBUG nova.network.neutron [req-79d4fbda-2158-47c6-8cee-d0f071d2cfce req-6609447d-dbf8-4e41-93e1-f01b1c3f92f0 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Updating instance_info_cache with network_info: [{"id": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "address": "fa:16:3e:4a:47:4a", "network": {"id": "daf63f47-2032-4eba-953d-74633ed782c9", "bridge": "br-int", "label": "tempest-network-smoke--643927133", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda625a0a-7c", "ovs_interfaceid": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 04:55:12 np0005591760 nova_compute[248045]: 2026-01-22 09:55:12.911 248049 DEBUG oslo_concurrency.lockutils [req-79d4fbda-2158-47c6-8cee-d0f071d2cfce req-6609447d-dbf8-4e41-93e1-f01b1c3f92f0 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Releasing lock "refresh_cache-6f83d437-b6a8-434e-bb56-a982f6e9fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.038 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.039 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4416MB free_disk=59.92180633544922GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.039 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.039 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 04:55:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:13 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07800a020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v691: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 259 op/s
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.361 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Instance 6f83d437-b6a8-434e-bb56-a982f6e9fc56 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.361 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.361 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.385 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 04:55:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:55:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:55:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576615174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.774 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.778 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.793 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.806 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 04:55:13 np0005591760 nova_compute[248045]: 2026-01-22 09:55:13.806 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:13 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07800a020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:13.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:14.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:14 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c005fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:14 np0005591760 nova_compute[248045]: 2026-01-22 09:55:14.807 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:55:14 np0005591760 nova_compute[248045]: 2026-01-22 09:55:14.808 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:55:14 np0005591760 nova_compute[248045]: 2026-01-22 09:55:14.808 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:55:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:15 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v692: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 259 op/s
Jan 22 04:55:15 np0005591760 nova_compute[248045]: 2026-01-22 09:55:15.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:55:15 np0005591760 nova_compute[248045]: 2026-01-22 09:55:15.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 04:55:15 np0005591760 nova_compute[248045]: 2026-01-22 09:55:15.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 04:55:15 np0005591760 nova_compute[248045]: 2026-01-22 09:55:15.324 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:15 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07800a020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:15 np0005591760 nova_compute[248045]: 2026-01-22 09:55:15.876 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "refresh_cache-6f83d437-b6a8-434e-bb56-a982f6e9fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 04:55:15 np0005591760 nova_compute[248045]: 2026-01-22 09:55:15.877 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquired lock "refresh_cache-6f83d437-b6a8-434e-bb56-a982f6e9fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 04:55:15 np0005591760 nova_compute[248045]: 2026-01-22 09:55:15.877 248049 DEBUG nova.network.neutron [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 04:55:15 np0005591760 nova_compute[248045]: 2026-01-22 09:55:15.877 248049 DEBUG nova.objects.instance [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6f83d437-b6a8-434e-bb56-a982f6e9fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:55:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:15.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:16.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:16 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07800a020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:16 np0005591760 nova_compute[248045]: 2026-01-22 09:55:16.897 248049 DEBUG nova.network.neutron [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Updating instance_info_cache with network_info: [{"id": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "address": "fa:16:3e:4a:47:4a", "network": {"id": "daf63f47-2032-4eba-953d-74633ed782c9", "bridge": "br-int", "label": "tempest-network-smoke--643927133", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda625a0a-7c", "ovs_interfaceid": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:55:16 np0005591760 nova_compute[248045]: 2026-01-22 09:55:16.909 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Releasing lock "refresh_cache-6f83d437-b6a8-434e-bb56-a982f6e9fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 04:55:16 np0005591760 nova_compute[248045]: 2026-01-22 09:55:16.909 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 04:55:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:17.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:17.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:17.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:17.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:17 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c005fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v693: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 137 op/s
Jan 22 04:55:17 np0005591760 nova_compute[248045]: 2026-01-22 09:55:17.274 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:55:17] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:55:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:55:17] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:55:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:17 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:17.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:55:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:55:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:18.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:55:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:18 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07800a020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:18.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:18.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:18.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:18.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:19 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07800a020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v694: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 117 op/s
Jan 22 04:55:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:55:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:55:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:55:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:55:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:55:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:55:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:19 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c005fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:55:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:19.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:55:20 np0005591760 nova_compute[248045]: 2026-01-22 09:55:20.326 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:55:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:20.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:55:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:20 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095520 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:55:20 np0005591760 ovn_controller[154073]: 2026-01-22T09:55:20Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:47:4a 10.100.0.4
Jan 22 04:55:20 np0005591760 ovn_controller[154073]: 2026-01-22T09:55:20Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:47:4a 10.100.0.4
Jan 22 04:55:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:21 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v695: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Jan 22 04:55:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:21 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:21.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:22 np0005591760 nova_compute[248045]: 2026-01-22 09:55:22.276 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:55:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:22.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:55:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:22 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:23 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v696: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 22 04:55:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:55:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:23 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:23.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:24.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:24 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:25 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v697: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 22 04:55:25 np0005591760 nova_compute[248045]: 2026-01-22 09:55:25.327 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:25 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:25.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095526 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:55:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:55:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:26.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:55:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:26 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090004d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:27.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:27.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:27.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:27.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:27 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v698: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 250 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.278 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.281 248049 INFO nova.compute.manager [None req-c2f86f06-1cb6-4929-ae4e-0f59ce89ba30 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Get console output#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.284 253225 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.554 248049 DEBUG oslo_concurrency.lockutils [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.554 248049 DEBUG oslo_concurrency.lockutils [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.555 248049 DEBUG oslo_concurrency.lockutils [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.555 248049 DEBUG oslo_concurrency.lockutils [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.555 248049 DEBUG oslo_concurrency.lockutils [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.556 248049 INFO nova.compute.manager [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Terminating instance#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.557 248049 DEBUG nova.compute.manager [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 22 04:55:27 np0005591760 kernel: tapda625a0a-7c (unregistering): left promiscuous mode
Jan 22 04:55:27 np0005591760 NetworkManager[48920]: <info>  [1769075727.5903] device (tapda625a0a-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.598 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:27 np0005591760 ovn_controller[154073]: 2026-01-22T09:55:27Z|00068|binding|INFO|Releasing lport da625a0a-7c45-44cc-bc96-31af0ab5f145 from this chassis (sb_readonly=0)
Jan 22 04:55:27 np0005591760 ovn_controller[154073]: 2026-01-22T09:55:27Z|00069|binding|INFO|Setting lport da625a0a-7c45-44cc-bc96-31af0ab5f145 down in Southbound
Jan 22 04:55:27 np0005591760 ovn_controller[154073]: 2026-01-22T09:55:27Z|00070|binding|INFO|Removing iface tapda625a0a-7c ovn-installed in OVS
Jan 22 04:55:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:55:27] "GET /metrics HTTP/1.1" 200 48606 "" "Prometheus/2.51.0"
Jan 22 04:55:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:55:27] "GET /metrics HTTP/1.1" 200 48606 "" "Prometheus/2.51.0"
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.607 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:47:4a 10.100.0.4'], port_security=['fa:16:3e:4a:47:4a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6f83d437-b6a8-434e-bb56-a982f6e9fc56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf63f47-2032-4eba-953d-74633ed782c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05af97dae0f4449ba7eb640bcd3f61e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc4cb18b-de28-4a4b-ac8c-f9a794f102d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86d90621-61cf-4b66-b93b-cff275d8d278, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>], logical_port=da625a0a-7c45-44cc-bc96-31af0ab5f145) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4a0d293700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.608 164103 INFO neutron.agent.ovn.metadata.agent [-] Port da625a0a-7c45-44cc-bc96-31af0ab5f145 in datapath daf63f47-2032-4eba-953d-74633ed782c9 unbound from our chassis#033[00m
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.609 164103 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network daf63f47-2032-4eba-953d-74633ed782c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.609 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[587bd2b4-4968-473e-94b3-c2590d8dcbdc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.610 164103 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9 namespace which is not needed anymore#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.630 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:27 np0005591760 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 22 04:55:27 np0005591760 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Consumed 11.201s CPU time.
Jan 22 04:55:27 np0005591760 systemd-machined[216371]: Machine qemu-3-instance-00000005 terminated.
Jan 22 04:55:27 np0005591760 neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9[256864]: [NOTICE]   (256869) : haproxy version is 2.8.14-c23fe91
Jan 22 04:55:27 np0005591760 neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9[256864]: [NOTICE]   (256869) : path to executable is /usr/sbin/haproxy
Jan 22 04:55:27 np0005591760 neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9[256864]: [ALERT]    (256869) : Current worker (256873) exited with code 143 (Terminated)
Jan 22 04:55:27 np0005591760 neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9[256864]: [WARNING]  (256869) : All workers exited. Exiting... (0)
Jan 22 04:55:27 np0005591760 systemd[1]: libpod-858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75.scope: Deactivated successfully.
Jan 22 04:55:27 np0005591760 podman[257295]: 2026-01-22 09:55:27.704533706 +0000 UTC m=+0.033324628 container died 858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 04:55:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75-userdata-shm.mount: Deactivated successfully.
Jan 22 04:55:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-08838bc1cad1fcb0b5842ca5d3a098dfd219acc856fd76c4a7d228b9f7f9d167-merged.mount: Deactivated successfully.
Jan 22 04:55:27 np0005591760 podman[257295]: 2026-01-22 09:55:27.725920961 +0000 UTC m=+0.054711882 container cleanup 858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 04:55:27 np0005591760 systemd[1]: libpod-conmon-858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75.scope: Deactivated successfully.
Jan 22 04:55:27 np0005591760 podman[257319]: 2026-01-22 09:55:27.770654814 +0000 UTC m=+0.028566064 container remove 858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.774 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[17e63f77-fee8-4c0f-bb16-22d6c4adc1aa]: (4, ('Thu Jan 22 09:55:27 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9 (858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75)\n858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75\nThu Jan 22 09:55:27 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9 (858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75)\n858b712d3055ca9abf8b973f343b071bbb206c9961688a3eda82f81bef4a0b75\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.777 248049 INFO nova.virt.libvirt.driver [-] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Instance destroyed successfully.#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.777 248049 DEBUG nova.objects.instance [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lazy-loading 'resources' on Instance uuid 6f83d437-b6a8-434e-bb56-a982f6e9fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.776 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[28139584-423c-450d-8a75-30d3a29e32ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.779 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaf63f47-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.780 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.786 248049 DEBUG nova.virt.libvirt.vif [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T09:55:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-779915831',display_name='tempest-TestNetworkBasicOps-server-779915831',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-779915831',id=5,image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG5hZ2SSN9qQ+kkgmxWlbRd4sPfjlE083NoFOfh7oQGcHaMZfssTRROyFb/ADgbinP/yrXxAEyHFmiWSiPhZKwsWabVnEfaQ0pCP9lA/btSHB3hIICOVyzi0KxylceZGyA==',key_name='tempest-TestNetworkBasicOps-1348098295',keypairs=<?>,launch_index=0,launched_at=2026-01-22T09:55:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='05af97dae0f4449ba7eb640bcd3f61e6',ramdisk_id='',reservation_id='r-dlzm9rtt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bb9741cf-1bcc-4b9c-affa-dda3b9a7c93d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-349110285',owner_user_name='tempest-TestNetworkBasicOps-349110285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T09:55:08Z,user_data=None,user_id='4428dd9b0fb64c25b8f33b0050d4ef6f',uuid=6f83d437-b6a8-434e-bb56-a982f6e9fc56,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "address": "fa:16:3e:4a:47:4a", "network": {"id": "daf63f47-2032-4eba-953d-74633ed782c9", "bridge": "br-int", "label": "tempest-network-smoke--643927133", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda625a0a-7c", "ovs_interfaceid": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.787 248049 DEBUG nova.network.os_vif_util [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converting VIF {"id": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "address": "fa:16:3e:4a:47:4a", "network": {"id": "daf63f47-2032-4eba-953d-74633ed782c9", "bridge": "br-int", "label": "tempest-network-smoke--643927133", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05af97dae0f4449ba7eb640bcd3f61e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda625a0a-7c", "ovs_interfaceid": "da625a0a-7c45-44cc-bc96-31af0ab5f145", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.787 248049 DEBUG nova.network.os_vif_util [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:47:4a,bridge_name='br-int',has_traffic_filtering=True,id=da625a0a-7c45-44cc-bc96-31af0ab5f145,network=Network(daf63f47-2032-4eba-953d-74633ed782c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda625a0a-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.788 248049 DEBUG os_vif [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:47:4a,bridge_name='br-int',has_traffic_filtering=True,id=da625a0a-7c45-44cc-bc96-31af0ab5f145,network=Network(daf63f47-2032-4eba-953d-74633ed782c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda625a0a-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.789 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.789 248049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda625a0a-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.790 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.791 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 04:55:27 np0005591760 kernel: tapdaf63f47-20: left promiscuous mode
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.800 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.802 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.804 248049 INFO os_vif [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:47:4a,bridge_name='br-int',has_traffic_filtering=True,id=da625a0a-7c45-44cc-bc96-31af0ab5f145,network=Network(daf63f47-2032-4eba-953d-74633ed782c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda625a0a-7c')#033[00m
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.804 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4898eb-aefc-486d-a61f-cfc09e6071b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.814 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[007966d7-17f4-47e3-adf8-db14fc84bd13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.815 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[6da7b040-b98b-4de1-b7e4-d154aae5954e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.828 253045 DEBUG oslo.privsep.daemon [-] privsep: reply[e6215011-4595-4f3d-8fd4-135dd51643dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 328731, 'reachable_time': 37324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257354, 'error': None, 'target': 'ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.832 164492 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-daf63f47-2032-4eba-953d-74633ed782c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 22 04:55:27 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:27.832 164492 DEBUG oslo.privsep.daemon [-] privsep: reply[3d74d945-3df4-42a8-ba63-f9f693eee03e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 04:55:27 np0005591760 systemd[1]: run-netns-ovnmeta\x2ddaf63f47\x2d2032\x2d4eba\x2d953d\x2d74633ed782c9.mount: Deactivated successfully.
Jan 22 04:55:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:27 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:27.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.975 248049 INFO nova.virt.libvirt.driver [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Deleting instance files /var/lib/nova/instances/6f83d437-b6a8-434e-bb56-a982f6e9fc56_del#033[00m
Jan 22 04:55:27 np0005591760 nova_compute[248045]: 2026-01-22 09:55:27.975 248049 INFO nova.virt.libvirt.driver [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Deletion of /var/lib/nova/instances/6f83d437-b6a8-434e-bb56-a982f6e9fc56_del complete#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.045 248049 INFO nova.compute.manager [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Took 0.49 seconds to destroy the instance on the hypervisor.#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.045 248049 DEBUG oslo.service.loopingcall [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.045 248049 DEBUG nova.compute.manager [-] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.046 248049 DEBUG nova.network.neutron [-] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.101 248049 DEBUG nova.compute.manager [req-af898fcf-9bc5-48b9-878f-4f47f6afa6ae req-fb24f875-940b-44a9-9f86-2a0f86c2f40a e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Received event network-vif-unplugged-da625a0a-7c45-44cc-bc96-31af0ab5f145 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.102 248049 DEBUG oslo_concurrency.lockutils [req-af898fcf-9bc5-48b9-878f-4f47f6afa6ae req-fb24f875-940b-44a9-9f86-2a0f86c2f40a e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.102 248049 DEBUG oslo_concurrency.lockutils [req-af898fcf-9bc5-48b9-878f-4f47f6afa6ae req-fb24f875-940b-44a9-9f86-2a0f86c2f40a e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.102 248049 DEBUG oslo_concurrency.lockutils [req-af898fcf-9bc5-48b9-878f-4f47f6afa6ae req-fb24f875-940b-44a9-9f86-2a0f86c2f40a e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.102 248049 DEBUG nova.compute.manager [req-af898fcf-9bc5-48b9-878f-4f47f6afa6ae req-fb24f875-940b-44a9-9f86-2a0f86c2f40a e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] No waiting events found dispatching network-vif-unplugged-da625a0a-7c45-44cc-bc96-31af0ab5f145 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.103 248049 DEBUG nova.compute.manager [req-af898fcf-9bc5-48b9-878f-4f47f6afa6ae req-fb24f875-940b-44a9-9f86-2a0f86c2f40a e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Received event network-vif-unplugged-da625a0a-7c45-44cc-bc96-31af0ab5f145 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 22 04:55:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:28 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:55:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:55:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:55:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:28.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:55:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:28 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:28.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 9 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:28.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:28.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:28.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.918 248049 DEBUG nova.network.neutron [-] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.927 248049 INFO nova.compute.manager [-] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Took 0.88 seconds to deallocate network for instance.#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.968 248049 DEBUG oslo_concurrency.lockutils [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:55:28 np0005591760 nova_compute[248045]: 2026-01-22 09:55:28.968 248049 DEBUG oslo_concurrency.lockutils [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:55:29 np0005591760 nova_compute[248045]: 2026-01-22 09:55:29.019 248049 DEBUG oslo_concurrency.processutils [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:55:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:29 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090004d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v699: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 22 04:55:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:55:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1421455102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:55:29 np0005591760 nova_compute[248045]: 2026-01-22 09:55:29.365 248049 DEBUG oslo_concurrency.processutils [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:55:29 np0005591760 nova_compute[248045]: 2026-01-22 09:55:29.369 248049 DEBUG nova.compute.provider_tree [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:55:29 np0005591760 nova_compute[248045]: 2026-01-22 09:55:29.379 248049 DEBUG nova.scheduler.client.report [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:55:29 np0005591760 nova_compute[248045]: 2026-01-22 09:55:29.395 248049 DEBUG oslo_concurrency.lockutils [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:29 np0005591760 nova_compute[248045]: 2026-01-22 09:55:29.413 248049 INFO nova.scheduler.client.report [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Deleted allocations for instance 6f83d437-b6a8-434e-bb56-a982f6e9fc56#033[00m
Jan 22 04:55:29 np0005591760 nova_compute[248045]: 2026-01-22 09:55:29.463 248049 DEBUG oslo_concurrency.lockutils [None req-788e6152-01e4-42df-be69-6095950650d9 4428dd9b0fb64c25b8f33b0050d4ef6f 05af97dae0f4449ba7eb640bcd3f61e6 - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:29 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:29.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:30 np0005591760 nova_compute[248045]: 2026-01-22 09:55:30.159 248049 DEBUG nova.compute.manager [req-11a7e1d3-cb8e-4c3a-b162-31b907d11221 req-46a4a808-82b1-40c3-8a00-2e8250cd41c3 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Received event network-vif-plugged-da625a0a-7c45-44cc-bc96-31af0ab5f145 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:55:30 np0005591760 nova_compute[248045]: 2026-01-22 09:55:30.159 248049 DEBUG oslo_concurrency.lockutils [req-11a7e1d3-cb8e-4c3a-b162-31b907d11221 req-46a4a808-82b1-40c3-8a00-2e8250cd41c3 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Acquiring lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:55:30 np0005591760 nova_compute[248045]: 2026-01-22 09:55:30.160 248049 DEBUG oslo_concurrency.lockutils [req-11a7e1d3-cb8e-4c3a-b162-31b907d11221 req-46a4a808-82b1-40c3-8a00-2e8250cd41c3 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:55:30 np0005591760 nova_compute[248045]: 2026-01-22 09:55:30.160 248049 DEBUG oslo_concurrency.lockutils [req-11a7e1d3-cb8e-4c3a-b162-31b907d11221 req-46a4a808-82b1-40c3-8a00-2e8250cd41c3 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] Lock "6f83d437-b6a8-434e-bb56-a982f6e9fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:30 np0005591760 nova_compute[248045]: 2026-01-22 09:55:30.160 248049 DEBUG nova.compute.manager [req-11a7e1d3-cb8e-4c3a-b162-31b907d11221 req-46a4a808-82b1-40c3-8a00-2e8250cd41c3 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] No waiting events found dispatching network-vif-plugged-da625a0a-7c45-44cc-bc96-31af0ab5f145 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 04:55:30 np0005591760 nova_compute[248045]: 2026-01-22 09:55:30.160 248049 WARNING nova.compute.manager [req-11a7e1d3-cb8e-4c3a-b162-31b907d11221 req-46a4a808-82b1-40c3-8a00-2e8250cd41c3 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Received unexpected event network-vif-plugged-da625a0a-7c45-44cc-bc96-31af0ab5f145 for instance with vm_state deleted and task_state None.#033[00m
Jan 22 04:55:30 np0005591760 nova_compute[248045]: 2026-01-22 09:55:30.160 248049 DEBUG nova.compute.manager [req-11a7e1d3-cb8e-4c3a-b162-31b907d11221 req-46a4a808-82b1-40c3-8a00-2e8250cd41c3 e60ff740af6c4003b4590e5dcca11e4e 68e0da8184214c3cb30cd8a6d6c3704d - - default default] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Received event network-vif-deleted-da625a0a-7c45-44cc-bc96-31af0ab5f145 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 04:55:30 np0005591760 nova_compute[248045]: 2026-01-22 09:55:30.329 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:55:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:30.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:55:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:30 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:31 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v700: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Jan 22 04:55:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:31 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:55:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:31 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:55:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:31 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090004d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:31 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:55:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:31 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:55:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:31.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:55:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:32.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:55:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:32 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090004d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:32 np0005591760 nova_compute[248045]: 2026-01-22 09:55:32.790 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:33 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v701: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 30 op/s
Jan 22 04:55:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:55:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:33 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:33.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:34 np0005591760 podman[257417]: 2026-01-22 09:55:34.059321173 +0000 UTC m=+0.045193291 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 04:55:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:55:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:34.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:55:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:34 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:34 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:55:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:34 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:55:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:35 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v702: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 30 op/s
Jan 22 04:55:35 np0005591760 nova_compute[248045]: 2026-01-22 09:55:35.331 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:35 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:55:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:35.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:55:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:36.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:36 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:37.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:37.057Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:37.058Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:37.058Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:37 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v703: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 17 KiB/s wr, 60 op/s
Jan 22 04:55:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:55:37] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:55:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:55:37] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:55:37 np0005591760 nova_compute[248045]: 2026-01-22 09:55:37.791 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:37 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:37 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:55:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:37 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:55:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:37.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:37 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:55:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:37 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:55:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:55:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:38.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:38 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:38.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:38.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:38.896Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:38.898Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:39 np0005591760 podman[257438]: 2026-01-22 09:55:39.060487948 +0000 UTC m=+0.053825889 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 04:55:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:39 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v704: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 15 KiB/s wr, 59 op/s
Jan 22 04:55:39 np0005591760 nova_compute[248045]: 2026-01-22 09:55:39.351 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:39 np0005591760 nova_compute[248045]: 2026-01-22 09:55:39.462 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:39 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:39.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:40 np0005591760 nova_compute[248045]: 2026-01-22 09:55:40.335 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:40.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:40 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095540 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:55:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:40 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 22 04:55:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:41 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v705: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 15 KiB/s wr, 61 op/s
Jan 22 04:55:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:41 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:41.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:42.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:42 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:42 np0005591760 nova_compute[248045]: 2026-01-22 09:55:42.775 248049 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769075727.774421, 6f83d437-b6a8-434e-bb56-a982f6e9fc56 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 04:55:42 np0005591760 nova_compute[248045]: 2026-01-22 09:55:42.775 248049 INFO nova.compute.manager [-] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] VM Stopped (Lifecycle Event)#033[00m
Jan 22 04:55:42 np0005591760 nova_compute[248045]: 2026-01-22 09:55:42.789 248049 DEBUG nova.compute.manager [None req-7f857d17-201f-44e4-b046-bf8591cf9349 - - - - - -] [instance: 6f83d437-b6a8-434e-bb56-a982f6e9fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 04:55:42 np0005591760 nova_compute[248045]: 2026-01-22 09:55:42.792 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:43 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v706: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.3 KiB/s wr, 32 op/s
Jan 22 04:55:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:55:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:43 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:43.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.329454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075744329481, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2130, "num_deletes": 251, "total_data_size": 4156643, "memory_usage": 4232360, "flush_reason": "Manual Compaction"}
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075744337759, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4039220, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20065, "largest_seqno": 22194, "table_properties": {"data_size": 4029739, "index_size": 5911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19882, "raw_average_key_size": 20, "raw_value_size": 4010563, "raw_average_value_size": 4096, "num_data_blocks": 258, "num_entries": 979, "num_filter_entries": 979, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769075537, "oldest_key_time": 1769075537, "file_creation_time": 1769075744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8354 microseconds, and 5795 cpu microseconds.
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.337807) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4039220 bytes OK
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.337819) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.338940) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.338950) EVENT_LOG_v1 {"time_micros": 1769075744338947, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.338960) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4148022, prev total WAL file size 4148022, number of live WAL files 2.
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.339601) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(3944KB)], [44(11MB)]
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075744339622, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 16606976, "oldest_snapshot_seqno": -1}
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5546 keys, 14456958 bytes, temperature: kUnknown
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075744372230, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 14456958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14418403, "index_size": 23597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 139966, "raw_average_key_size": 25, "raw_value_size": 14316423, "raw_average_value_size": 2581, "num_data_blocks": 972, "num_entries": 5546, "num_filter_entries": 5546, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769075744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.372416) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14456958 bytes
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.372863) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 508.3 rd, 442.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.0 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6070, records dropped: 524 output_compression: NoCompression
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.372876) EVENT_LOG_v1 {"time_micros": 1769075744372870, "job": 22, "event": "compaction_finished", "compaction_time_micros": 32670, "compaction_time_cpu_micros": 20313, "output_level": 6, "num_output_files": 1, "total_output_size": 14456958, "num_input_records": 6070, "num_output_records": 5546, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075744373495, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075744375112, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.339552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.375152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.375155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.375156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.375157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:55:44 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:55:44.375158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:55:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:44.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:44 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:45 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v707: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.3 KiB/s wr, 32 op/s
Jan 22 04:55:45 np0005591760 nova_compute[248045]: 2026-01-22 09:55:45.336 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:45 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:45.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095546 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 22 04:55:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:55:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:46.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:55:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:46 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:47.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:47.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:47.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:47.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:47 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v708: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 33 op/s
Jan 22 04:55:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:47.313 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:55:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:47.314 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:55:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:55:47.314 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:55:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:55:47] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:55:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:55:47] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:55:47 np0005591760 nova_compute[248045]: 2026-01-22 09:55:47.793 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:47 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:47.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:55:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:48.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:48.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:48.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:48.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:48.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:49 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v709: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:55:49
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['.nfs', 'default.rgw.meta', 'vms', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.mgr', 'images']
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:55:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:55:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:49 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c007db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:49.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:50 np0005591760 nova_compute[248045]: 2026-01-22 09:55:50.338 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:50.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:50 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:51 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v710: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Jan 22 04:55:51 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] Check health
Jan 22 04:55:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:51 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:51.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:55:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:52.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:55:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:52 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0940bf2f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:52 np0005591760 nova_compute[248045]: 2026-01-22 09:55:52.794 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:53 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v711: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Jan 22 04:55:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:55:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:53 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:53.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:54.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:54 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:55 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0940bfe30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v712: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Jan 22 04:55:55 np0005591760 nova_compute[248045]: 2026-01-22 09:55:55.340 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:55 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:55.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:55:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:56.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:55:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:56 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:57.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:57.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:57.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:57.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:57 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v713: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Jan 22 04:55:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:55:57] "GET /metrics HTTP/1.1" 200 48588 "" "Prometheus/2.51.0"
Jan 22 04:55:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:55:57] "GET /metrics HTTP/1.1" 200 48588 "" "Prometheus/2.51.0"
Jan 22 04:55:57 np0005591760 nova_compute[248045]: 2026-01-22 09:55:57.795 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:55:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:57 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0940bfe30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:57.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:55:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:55:58.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:55:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:58 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095558 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:55:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:58.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:58.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:58.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:55:58.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:55:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:59 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v714: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:55:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:55:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:55:59 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:55:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:55:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:55:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:55:59.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:00 np0005591760 nova_compute[248045]: 2026-01-22 09:56:00.342 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:00.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:00 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0940bfe30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:01 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v715: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 04:56:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:01 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:01.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:02.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:02 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:02 np0005591760 nova_compute[248045]: 2026-01-22 09:56:02.796 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:03 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0940c12a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v716: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 04:56:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:56:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:03 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:03.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:04.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:04 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:05 np0005591760 podman[257515]: 2026-01-22 09:56:05.051448947 +0000 UTC m=+0.039176685 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 04:56:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:05 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v717: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 04:56:05 np0005591760 nova_compute[248045]: 2026-01-22 09:56:05.344 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:05 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0940c12a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:05.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:06.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:06 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0940c12a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:07.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:07.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:07.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:07.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:07 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0940c12a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v718: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 142 op/s
Jan 22 04:56:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:56:07] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:56:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:56:07] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:56:07 np0005591760 nova_compute[248045]: 2026-01-22 09:56:07.797 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:07 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:07.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:56:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:08.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:08 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09c005060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:08.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:08.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:08.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:08.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:09 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09c005060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v719: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 105 op/s
Jan 22 04:56:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:09 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0940c23a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:56:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:09.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:56:10 np0005591760 podman[257536]: 2026-01-22 09:56:10.064517899 +0000 UTC m=+0.058151296 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:56:10 np0005591760 nova_compute[248045]: 2026-01-22 09:56:10.344 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:10.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:10 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:11 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v720: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Jan 22 04:56:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v721: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.5 MiB/s wr, 73 op/s
Jan 22 04:56:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:56:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:56:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:56:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:56:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:11 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:11 np0005591760 podman[257719]: 2026-01-22 09:56:11.932384414 +0000 UTC m=+0.029865006 container create 28f812af8249bf8054d2dd748b466507f893dbb7569ae296b6108169c883ecb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bartik, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 04:56:11 np0005591760 systemd[1]: Started libpod-conmon-28f812af8249bf8054d2dd748b466507f893dbb7569ae296b6108169c883ecb1.scope.
Jan 22 04:56:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:11.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:11 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:56:11 np0005591760 podman[257719]: 2026-01-22 09:56:11.990189978 +0000 UTC m=+0.087670570 container init 28f812af8249bf8054d2dd748b466507f893dbb7569ae296b6108169c883ecb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bartik, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 04:56:11 np0005591760 podman[257719]: 2026-01-22 09:56:11.995016969 +0000 UTC m=+0.092497562 container start 28f812af8249bf8054d2dd748b466507f893dbb7569ae296b6108169c883ecb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 04:56:11 np0005591760 podman[257719]: 2026-01-22 09:56:11.996150708 +0000 UTC m=+0.093631300 container attach 28f812af8249bf8054d2dd748b466507f893dbb7569ae296b6108169c883ecb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bartik, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 04:56:11 np0005591760 intelligent_bartik[257733]: 167 167
Jan 22 04:56:11 np0005591760 systemd[1]: libpod-28f812af8249bf8054d2dd748b466507f893dbb7569ae296b6108169c883ecb1.scope: Deactivated successfully.
Jan 22 04:56:11 np0005591760 podman[257719]: 2026-01-22 09:56:11.99914845 +0000 UTC m=+0.096629042 container died 28f812af8249bf8054d2dd748b466507f893dbb7569ae296b6108169c883ecb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bartik, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:56:12 np0005591760 systemd[1]: var-lib-containers-storage-overlay-adc620cbba510f1287f9d34885613a4519fc84a3abedda5b686ee5aef09b7ac9-merged.mount: Deactivated successfully.
Jan 22 04:56:12 np0005591760 podman[257719]: 2026-01-22 09:56:11.919323886 +0000 UTC m=+0.016804498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:56:12 np0005591760 podman[257719]: 2026-01-22 09:56:12.020924943 +0000 UTC m=+0.118405536 container remove 28f812af8249bf8054d2dd748b466507f893dbb7569ae296b6108169c883ecb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 04:56:12 np0005591760 systemd[1]: libpod-conmon-28f812af8249bf8054d2dd748b466507f893dbb7569ae296b6108169c883ecb1.scope: Deactivated successfully.
Jan 22 04:56:12 np0005591760 podman[257755]: 2026-01-22 09:56:12.147270723 +0000 UTC m=+0.031257824 container create 541271b5529e29c8423fc9d3f1f15f79b1c758e9b260023f32e6524bdfc973f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:56:12 np0005591760 systemd[1]: Started libpod-conmon-541271b5529e29c8423fc9d3f1f15f79b1c758e9b260023f32e6524bdfc973f6.scope.
Jan 22 04:56:12 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:56:12 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002c4c984878a9570382b4d231715d6dfffdde83c749a8e75d4ca120e651e365/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:12 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002c4c984878a9570382b4d231715d6dfffdde83c749a8e75d4ca120e651e365/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:12 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002c4c984878a9570382b4d231715d6dfffdde83c749a8e75d4ca120e651e365/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:12 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002c4c984878a9570382b4d231715d6dfffdde83c749a8e75d4ca120e651e365/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:12 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002c4c984878a9570382b4d231715d6dfffdde83c749a8e75d4ca120e651e365/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:12 np0005591760 podman[257755]: 2026-01-22 09:56:12.208119215 +0000 UTC m=+0.092106336 container init 541271b5529e29c8423fc9d3f1f15f79b1c758e9b260023f32e6524bdfc973f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_williamson, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 04:56:12 np0005591760 podman[257755]: 2026-01-22 09:56:12.214616046 +0000 UTC m=+0.098603147 container start 541271b5529e29c8423fc9d3f1f15f79b1c758e9b260023f32e6524bdfc973f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_williamson, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:56:12 np0005591760 podman[257755]: 2026-01-22 09:56:12.215866863 +0000 UTC m=+0.099853965 container attach 541271b5529e29c8423fc9d3f1f15f79b1c758e9b260023f32e6524bdfc973f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_williamson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:56:12 np0005591760 podman[257755]: 2026-01-22 09:56:12.134057567 +0000 UTC m=+0.018044688 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:56:12 np0005591760 nova_compute[248045]: 2026-01-22 09:56:12.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:56:12 np0005591760 nova_compute[248045]: 2026-01-22 09:56:12.302 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:56:12 np0005591760 nova_compute[248045]: 2026-01-22 09:56:12.315 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:56:12 np0005591760 nova_compute[248045]: 2026-01-22 09:56:12.315 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:56:12 np0005591760 nova_compute[248045]: 2026-01-22 09:56:12.315 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:56:12 np0005591760 nova_compute[248045]: 2026-01-22 09:56:12.316 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 04:56:12 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:56:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:56:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:56:12 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:56:12 np0005591760 keen_williamson[257769]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:56:12 np0005591760 keen_williamson[257769]: --> All data devices are unavailable
Jan 22 04:56:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:56:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:12.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:56:12 np0005591760 podman[257755]: 2026-01-22 09:56:12.507604902 +0000 UTC m=+0.391592003 container died 541271b5529e29c8423fc9d3f1f15f79b1c758e9b260023f32e6524bdfc973f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_williamson, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:56:12 np0005591760 systemd[1]: libpod-541271b5529e29c8423fc9d3f1f15f79b1c758e9b260023f32e6524bdfc973f6.scope: Deactivated successfully.
Jan 22 04:56:12 np0005591760 systemd[1]: var-lib-containers-storage-overlay-002c4c984878a9570382b4d231715d6dfffdde83c749a8e75d4ca120e651e365-merged.mount: Deactivated successfully.
Jan 22 04:56:12 np0005591760 podman[257755]: 2026-01-22 09:56:12.534676951 +0000 UTC m=+0.418664052 container remove 541271b5529e29c8423fc9d3f1f15f79b1c758e9b260023f32e6524bdfc973f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 04:56:12 np0005591760 systemd[1]: libpod-conmon-541271b5529e29c8423fc9d3f1f15f79b1c758e9b260023f32e6524bdfc973f6.scope: Deactivated successfully.
Jan 22 04:56:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:12 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0940c23a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:12 np0005591760 nova_compute[248045]: 2026-01-22 09:56:12.798 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:56:12 np0005591760 podman[257901]: 2026-01-22 09:56:12.963518372 +0000 UTC m=+0.029707570 container create e7862c8cda329330d04b2c69ced85a2328829002ab4fe2729caae9d7527f52e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:56:12 np0005591760 systemd[1]: Started libpod-conmon-e7862c8cda329330d04b2c69ced85a2328829002ab4fe2729caae9d7527f52e2.scope.
Jan 22 04:56:13 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:56:13 np0005591760 podman[257901]: 2026-01-22 09:56:13.024316108 +0000 UTC m=+0.090505296 container init e7862c8cda329330d04b2c69ced85a2328829002ab4fe2729caae9d7527f52e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 22 04:56:13 np0005591760 podman[257901]: 2026-01-22 09:56:13.029338879 +0000 UTC m=+0.095528067 container start e7862c8cda329330d04b2c69ced85a2328829002ab4fe2729caae9d7527f52e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_villani, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 04:56:13 np0005591760 podman[257901]: 2026-01-22 09:56:13.030524726 +0000 UTC m=+0.096713914 container attach e7862c8cda329330d04b2c69ced85a2328829002ab4fe2729caae9d7527f52e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 22 04:56:13 np0005591760 ecstatic_villani[257914]: 167 167
Jan 22 04:56:13 np0005591760 systemd[1]: libpod-e7862c8cda329330d04b2c69ced85a2328829002ab4fe2729caae9d7527f52e2.scope: Deactivated successfully.
Jan 22 04:56:13 np0005591760 podman[257901]: 2026-01-22 09:56:13.03382067 +0000 UTC m=+0.100009859 container died e7862c8cda329330d04b2c69ced85a2328829002ab4fe2729caae9d7527f52e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 22 04:56:13 np0005591760 podman[257901]: 2026-01-22 09:56:12.952122172 +0000 UTC m=+0.018311381 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:56:13 np0005591760 systemd[1]: var-lib-containers-storage-overlay-bda1fb683b5c8532dca48348c9f9c5e39747f1df6f931aa7df2248614869060b-merged.mount: Deactivated successfully.
Jan 22 04:56:13 np0005591760 podman[257901]: 2026-01-22 09:56:13.057635999 +0000 UTC m=+0.123825177 container remove e7862c8cda329330d04b2c69ced85a2328829002ab4fe2729caae9d7527f52e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_villani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:56:13 np0005591760 systemd[1]: libpod-conmon-e7862c8cda329330d04b2c69ced85a2328829002ab4fe2729caae9d7527f52e2.scope: Deactivated successfully.
Jan 22 04:56:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:13 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09c005d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:13 np0005591760 podman[257936]: 2026-01-22 09:56:13.179026363 +0000 UTC m=+0.029268202 container create 2534e20cffa8f3d881589b1399f5a5dd07c2b735887f5e04dc13fdbb47b46233 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wescoff, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:56:13 np0005591760 systemd[1]: Started libpod-conmon-2534e20cffa8f3d881589b1399f5a5dd07c2b735887f5e04dc13fdbb47b46233.scope.
Jan 22 04:56:13 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:56:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b58eacd44d3abd39cbb47efc703661fe91f354e052cfea9200ccab07618d040/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b58eacd44d3abd39cbb47efc703661fe91f354e052cfea9200ccab07618d040/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b58eacd44d3abd39cbb47efc703661fe91f354e052cfea9200ccab07618d040/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:13 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b58eacd44d3abd39cbb47efc703661fe91f354e052cfea9200ccab07618d040/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:13 np0005591760 podman[257936]: 2026-01-22 09:56:13.24199341 +0000 UTC m=+0.092235248 container init 2534e20cffa8f3d881589b1399f5a5dd07c2b735887f5e04dc13fdbb47b46233 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wescoff, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:56:13 np0005591760 podman[257936]: 2026-01-22 09:56:13.247772517 +0000 UTC m=+0.098014355 container start 2534e20cffa8f3d881589b1399f5a5dd07c2b735887f5e04dc13fdbb47b46233 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 04:56:13 np0005591760 podman[257936]: 2026-01-22 09:56:13.249083058 +0000 UTC m=+0.099324916 container attach 2534e20cffa8f3d881589b1399f5a5dd07c2b735887f5e04dc13fdbb47b46233 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 22 04:56:13 np0005591760 podman[257936]: 2026-01-22 09:56:13.166986169 +0000 UTC m=+0.017228027 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:56:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]: {
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:    "0": [
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:        {
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            "devices": [
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "/dev/loop3"
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            ],
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            "lv_name": "ceph_lv0",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            "lv_size": "21470642176",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            "name": "ceph_lv0",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            "tags": {
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.cluster_name": "ceph",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.crush_device_class": "",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.encrypted": "0",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.osd_id": "0",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.type": "block",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.vdo": "0",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:                "ceph.with_tpm": "0"
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            },
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            "type": "block",
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:            "vg_name": "ceph_vg0"
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:        }
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]:    ]
Jan 22 04:56:13 np0005591760 wonderful_wescoff[257949]: }
Jan 22 04:56:13 np0005591760 systemd[1]: libpod-2534e20cffa8f3d881589b1399f5a5dd07c2b735887f5e04dc13fdbb47b46233.scope: Deactivated successfully.
Jan 22 04:56:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v722: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.5 MiB/s wr, 73 op/s
Jan 22 04:56:13 np0005591760 podman[257958]: 2026-01-22 09:56:13.512751596 +0000 UTC m=+0.017409760 container died 2534e20cffa8f3d881589b1399f5a5dd07c2b735887f5e04dc13fdbb47b46233 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:56:13 np0005591760 systemd[1]: var-lib-containers-storage-overlay-2b58eacd44d3abd39cbb47efc703661fe91f354e052cfea9200ccab07618d040-merged.mount: Deactivated successfully.
Jan 22 04:56:13 np0005591760 podman[257958]: 2026-01-22 09:56:13.531370885 +0000 UTC m=+0.036029051 container remove 2534e20cffa8f3d881589b1399f5a5dd07c2b735887f5e04dc13fdbb47b46233 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wescoff, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 04:56:13 np0005591760 systemd[1]: libpod-conmon-2534e20cffa8f3d881589b1399f5a5dd07c2b735887f5e04dc13fdbb47b46233.scope: Deactivated successfully.
Jan 22 04:56:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:13 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c0068d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:13 np0005591760 podman[258048]: 2026-01-22 09:56:13.955647079 +0000 UTC m=+0.030232219 container create 4557534591adb7c5b1b8f4d5f8b2b6b7d7676db84c10f31b013b7e9099e81928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_villani, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 04:56:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:13.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:13 np0005591760 systemd[1]: Started libpod-conmon-4557534591adb7c5b1b8f4d5f8b2b6b7d7676db84c10f31b013b7e9099e81928.scope.
Jan 22 04:56:13 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:56:14 np0005591760 podman[258048]: 2026-01-22 09:56:14.008259189 +0000 UTC m=+0.082844350 container init 4557534591adb7c5b1b8f4d5f8b2b6b7d7676db84c10f31b013b7e9099e81928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 22 04:56:14 np0005591760 podman[258048]: 2026-01-22 09:56:14.013221676 +0000 UTC m=+0.087806817 container start 4557534591adb7c5b1b8f4d5f8b2b6b7d7676db84c10f31b013b7e9099e81928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:56:14 np0005591760 podman[258048]: 2026-01-22 09:56:14.014369461 +0000 UTC m=+0.088954622 container attach 4557534591adb7c5b1b8f4d5f8b2b6b7d7676db84c10f31b013b7e9099e81928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_villani, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:56:14 np0005591760 clever_villani[258062]: 167 167
Jan 22 04:56:14 np0005591760 systemd[1]: libpod-4557534591adb7c5b1b8f4d5f8b2b6b7d7676db84c10f31b013b7e9099e81928.scope: Deactivated successfully.
Jan 22 04:56:14 np0005591760 podman[258048]: 2026-01-22 09:56:14.017610172 +0000 UTC m=+0.092195313 container died 4557534591adb7c5b1b8f4d5f8b2b6b7d7676db84c10f31b013b7e9099e81928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_villani, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:56:14 np0005591760 systemd[1]: var-lib-containers-storage-overlay-07e0f525eb6e79e5031349f0f7136d5e9b823731df9870d35b48872292cbc975-merged.mount: Deactivated successfully.
Jan 22 04:56:14 np0005591760 podman[258048]: 2026-01-22 09:56:14.036408029 +0000 UTC m=+0.110993161 container remove 4557534591adb7c5b1b8f4d5f8b2b6b7d7676db84c10f31b013b7e9099e81928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_villani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 04:56:14 np0005591760 podman[258048]: 2026-01-22 09:56:13.943067608 +0000 UTC m=+0.017652749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:56:14 np0005591760 systemd[1]: libpod-conmon-4557534591adb7c5b1b8f4d5f8b2b6b7d7676db84c10f31b013b7e9099e81928.scope: Deactivated successfully.
Jan 22 04:56:14 np0005591760 podman[258084]: 2026-01-22 09:56:14.157640475 +0000 UTC m=+0.029742246 container create bfb7ee2b4b10c5234b821d5eb7d98ca5fade9aecccfc2fb90ca043f26ec509ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 04:56:14 np0005591760 systemd[1]: Started libpod-conmon-bfb7ee2b4b10c5234b821d5eb7d98ca5fade9aecccfc2fb90ca043f26ec509ed.scope.
Jan 22 04:56:14 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:56:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68b72a8d1e13bb972315f51da4f0c8975d5117fc282827f70b5c6c74dfee086/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68b72a8d1e13bb972315f51da4f0c8975d5117fc282827f70b5c6c74dfee086/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68b72a8d1e13bb972315f51da4f0c8975d5117fc282827f70b5c6c74dfee086/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68b72a8d1e13bb972315f51da4f0c8975d5117fc282827f70b5c6c74dfee086/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:56:14 np0005591760 podman[258084]: 2026-01-22 09:56:14.215630968 +0000 UTC m=+0.087732738 container init bfb7ee2b4b10c5234b821d5eb7d98ca5fade9aecccfc2fb90ca043f26ec509ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_benz, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 22 04:56:14 np0005591760 podman[258084]: 2026-01-22 09:56:14.224004297 +0000 UTC m=+0.096106057 container start bfb7ee2b4b10c5234b821d5eb7d98ca5fade9aecccfc2fb90ca043f26ec509ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:56:14 np0005591760 podman[258084]: 2026-01-22 09:56:14.226945714 +0000 UTC m=+0.099047494 container attach bfb7ee2b4b10c5234b821d5eb7d98ca5fade9aecccfc2fb90ca043f26ec509ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_benz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:56:14 np0005591760 podman[258084]: 2026-01-22 09:56:14.146386063 +0000 UTC m=+0.018487843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:56:14 np0005591760 nova_compute[248045]: 2026-01-22 09:56:14.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:56:14 np0005591760 nova_compute[248045]: 2026-01-22 09:56:14.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:56:14 np0005591760 nova_compute[248045]: 2026-01-22 09:56:14.317 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:56:14 np0005591760 nova_compute[248045]: 2026-01-22 09:56:14.317 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:56:14 np0005591760 nova_compute[248045]: 2026-01-22 09:56:14.318 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:56:14 np0005591760 nova_compute[248045]: 2026-01-22 09:56:14.318 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 04:56:14 np0005591760 nova_compute[248045]: 2026-01-22 09:56:14.318 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:56:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:14.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:14 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:56:14 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336941131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:56:14 np0005591760 nova_compute[248045]: 2026-01-22 09:56:14.673 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:56:14 np0005591760 eloquent_benz[258097]: {}
Jan 22 04:56:14 np0005591760 lvm[258197]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:56:14 np0005591760 lvm[258197]: VG ceph_vg0 finished
Jan 22 04:56:14 np0005591760 systemd[1]: libpod-bfb7ee2b4b10c5234b821d5eb7d98ca5fade9aecccfc2fb90ca043f26ec509ed.scope: Deactivated successfully.
Jan 22 04:56:14 np0005591760 podman[258084]: 2026-01-22 09:56:14.740948774 +0000 UTC m=+0.613050534 container died bfb7ee2b4b10c5234b821d5eb7d98ca5fade9aecccfc2fb90ca043f26ec509ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:56:14 np0005591760 systemd[1]: var-lib-containers-storage-overlay-d68b72a8d1e13bb972315f51da4f0c8975d5117fc282827f70b5c6c74dfee086-merged.mount: Deactivated successfully.
Jan 22 04:56:14 np0005591760 podman[258084]: 2026-01-22 09:56:14.768091177 +0000 UTC m=+0.640192936 container remove bfb7ee2b4b10c5234b821d5eb7d98ca5fade9aecccfc2fb90ca043f26ec509ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_benz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:56:14 np0005591760 systemd[1]: libpod-conmon-bfb7ee2b4b10c5234b821d5eb7d98ca5fade9aecccfc2fb90ca043f26ec509ed.scope: Deactivated successfully.
Jan 22 04:56:14 np0005591760 ovn_controller[154073]: 2026-01-22T09:56:14Z|00071|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 22 04:56:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:56:14 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:56:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:56:14 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:56:14 np0005591760 nova_compute[248045]: 2026-01-22 09:56:14.971 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:56:14 np0005591760 nova_compute[248045]: 2026-01-22 09:56:14.971 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4520MB free_disk=59.94289016723633GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 04:56:14 np0005591760 nova_compute[248045]: 2026-01-22 09:56:14.972 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:56:14 np0005591760 nova_compute[248045]: 2026-01-22 09:56:14.972 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:56:15 np0005591760 nova_compute[248045]: 2026-01-22 09:56:15.020 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 04:56:15 np0005591760 nova_compute[248045]: 2026-01-22 09:56:15.021 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 04:56:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:56:15.076 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:52:1d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:ec:a7:e9:bb:bd'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:56:15 np0005591760 nova_compute[248045]: 2026-01-22 09:56:15.076 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:56:15.077 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 04:56:15 np0005591760 nova_compute[248045]: 2026-01-22 09:56:15.082 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:56:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:15 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:15 np0005591760 nova_compute[248045]: 2026-01-22 09:56:15.345 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:56:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3852462401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:56:15 np0005591760 nova_compute[248045]: 2026-01-22 09:56:15.424 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:56:15 np0005591760 nova_compute[248045]: 2026-01-22 09:56:15.428 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:56:15 np0005591760 nova_compute[248045]: 2026-01-22 09:56:15.441 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:56:15 np0005591760 nova_compute[248045]: 2026-01-22 09:56:15.454 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 04:56:15 np0005591760 nova_compute[248045]: 2026-01-22 09:56:15.455 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:56:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v723: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Jan 22 04:56:15 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:56:15 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:56:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:15 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09c006690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:15.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:16 np0005591760 nova_compute[248045]: 2026-01-22 09:56:16.455 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:56:16 np0005591760 nova_compute[248045]: 2026-01-22 09:56:16.455 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 04:56:16 np0005591760 nova_compute[248045]: 2026-01-22 09:56:16.456 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 04:56:16 np0005591760 nova_compute[248045]: 2026-01-22 09:56:16.469 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 04:56:16 np0005591760 nova_compute[248045]: 2026-01-22 09:56:16.469 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:56:16 np0005591760 nova_compute[248045]: 2026-01-22 09:56:16.470 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:56:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:16.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:16 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09c006690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:17.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:17.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:17.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:17.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:17 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0940c23a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v724: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 123 KiB/s wr, 26 op/s
Jan 22 04:56:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:56:17] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:56:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:56:17] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:56:17 np0005591760 nova_compute[248045]: 2026-01-22 09:56:17.800 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:17 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:17.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:56:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:18.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:18 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:18.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:18.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:18.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:18.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:19 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:56:19.079 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:56:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:19 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:56:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:56:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:56:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:56:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:56:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:56:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v725: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 123 KiB/s wr, 26 op/s
Jan 22 04:56:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:19 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:19.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095620 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:56:20 np0005591760 nova_compute[248045]: 2026-01-22 09:56:20.346 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:20.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:20 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0940c23a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:21 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c006a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v726: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 197 B/s rd, 19 KiB/s wr, 0 op/s
Jan 22 04:56:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:21 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09c0073a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:21.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:22.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:22 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:22 np0005591760 nova_compute[248045]: 2026-01-22 09:56:22.801 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:23 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:56:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v727: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 16 KiB/s wr, 0 op/s
Jan 22 04:56:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:23 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c006a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:23.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:24 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09c0073a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:25 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:25 np0005591760 nova_compute[248045]: 2026-01-22 09:56:25.347 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v728: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 18 KiB/s wr, 1 op/s
Jan 22 04:56:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:25 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:25.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:26 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c006a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:27.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:27.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:27.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:27.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:27 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09c0073a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v729: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 6.7 KiB/s wr, 0 op/s
Jan 22 04:56:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:56:27] "GET /metrics HTTP/1.1" 200 48606 "" "Prometheus/2.51.0"
Jan 22 04:56:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:56:27] "GET /metrics HTTP/1.1" 200 48606 "" "Prometheus/2.51.0"
Jan 22 04:56:27 np0005591760 nova_compute[248045]: 2026-01-22 09:56:27.803 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:27 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:27.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:56:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:28.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:28 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:28.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:28.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:28.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:28.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:28 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:56:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:29 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c006bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v730: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 6.7 KiB/s wr, 0 op/s
Jan 22 04:56:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:29 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09c0073a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:29.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:30 np0005591760 nova_compute[248045]: 2026-01-22 09:56:30.349 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:56:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:30.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:56:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:30 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:31 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v731: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 22 04:56:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:31 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c006c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:56:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:31.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:56:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:32 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:56:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:32 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:56:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:32.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:32 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09c0073a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:32 np0005591760 nova_compute[248045]: 2026-01-22 09:56:32.804 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:33 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900056f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:56:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v732: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 22 04:56:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:33 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:34.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:34.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:34 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb08c006c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:35 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:35 np0005591760 nova_compute[248045]: 2026-01-22 09:56:35.351 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v733: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 22 04:56:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:35 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:56:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:36.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:56:36 np0005591760 podman[258305]: 2026-01-22 09:56:36.057697085 +0000 UTC m=+0.047363433 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:56:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:36 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:37.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:37.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:37.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:37.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:37 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v734: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 22 04:56:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:56:37] "GET /metrics HTTP/1.1" 200 48603 "" "Prometheus/2.51.0"
Jan 22 04:56:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:56:37] "GET /metrics HTTP/1.1" 200 48603 "" "Prometheus/2.51.0"
Jan 22 04:56:37 np0005591760 nova_compute[248045]: 2026-01-22 09:56:37.806 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:37 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:38.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:56:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:56:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:38.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:56:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:38 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:38.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:38.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:38.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:38.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:39 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v735: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 22 04:56:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:39 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=cleanup t=2026-01-22T09:56:39.964718865Z level=info msg="Completed cleanup jobs" duration=3.143719ms
Jan 22 04:56:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:40.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=grafana.update.checker t=2026-01-22T09:56:40.060028288Z level=info msg="Update check succeeded" duration=39.215268ms
Jan 22 04:56:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=plugins.update.checker t=2026-01-22T09:56:40.060885755Z level=info msg="Update check succeeded" duration=39.633497ms
Jan 22 04:56:40 np0005591760 nova_compute[248045]: 2026-01-22 09:56:40.352 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:40.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:40 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:41 np0005591760 podman[258328]: 2026-01-22 09:56:41.069522301 +0000 UTC m=+0.059460747 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:56:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:41 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v736: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 22 04:56:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:41 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a80023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:42 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:56:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:42.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:56:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:42.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:56:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:42 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a4004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:42 np0005591760 nova_compute[248045]: 2026-01-22 09:56:42.808 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:43 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:56:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v737: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 75 op/s
Jan 22 04:56:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:43 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:44.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:56:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:44.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:56:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:44 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:45 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a4005260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:45 np0005591760 nova_compute[248045]: 2026-01-22 09:56:45.355 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v738: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 22 04:56:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:45 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:46.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:46.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:46 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:47.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:47.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:47.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:47.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:47 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:56:47.315 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:56:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:56:47.315 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:56:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:56:47.315 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:56:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v739: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 22 04:56:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:56:47] "GET /metrics HTTP/1.1" 200 48603 "" "Prometheus/2.51.0"
Jan 22 04:56:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:56:47] "GET /metrics HTTP/1.1" 200 48603 "" "Prometheus/2.51.0"
Jan 22 04:56:47 np0005591760 nova_compute[248045]: 2026-01-22 09:56:47.811 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:47 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a4005260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:48.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:56:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:48.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:48.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:48.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:48.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:48.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:49 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:56:49
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['images', 'backups', '.mgr', '.rgw.root', 'default.rgw.meta', 'vms', 'volumes', '.nfs', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v740: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:56:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:56:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:49 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a80043c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:50.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:50 np0005591760 nova_compute[248045]: 2026-01-22 09:56:50.357 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:50.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:50 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a80043c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:51 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 22 04:56:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1981974291' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 04:56:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 22 04:56:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1981974291' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 04:56:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v741: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 22 04:56:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:51 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:52 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:56:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:52.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:52.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:52 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:52 np0005591760 nova_compute[248045]: 2026-01-22 09:56:52.813 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:53 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:56:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v742: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 22 04:56:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:53 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a80043c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:54.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:54 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:55 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:55 np0005591760 nova_compute[248045]: 2026-01-22 09:56:55.359 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v743: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 22 04:56:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:55 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:56.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:56:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:56.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:56:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:56 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a80043c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:57.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:57.057Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:57.057Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:57.058Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:57 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v744: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 1 op/s
Jan 22 04:56:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:56:57] "GET /metrics HTTP/1.1" 200 48607 "" "Prometheus/2.51.0"
Jan 22 04:56:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:56:57] "GET /metrics HTTP/1.1" 200 48607 "" "Prometheus/2.51.0"
Jan 22 04:56:57 np0005591760 nova_compute[248045]: 2026-01-22 09:56:57.815 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:56:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:57 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:56:58.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:56:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:56:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:56:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:56:58.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:56:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:58 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:58.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:58.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:58.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:56:58.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:56:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:59 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a80058b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v745: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 1 op/s
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015207717775281631 of space, bias 1.0, pg target 0.45623153325844895 quantized to 32 (current 32)
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:56:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:56:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:56:59 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:57:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:00.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:57:00 np0005591760 nova_compute[248045]: 2026-01-22 09:57:00.360 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:00.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:00 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:01 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v746: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 19 KiB/s wr, 29 op/s
Jan 22 04:57:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:01 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a80058b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:02 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:57:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:02.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:02.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:02 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:02 np0005591760 nova_compute[248045]: 2026-01-22 09:57:02.817 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:03 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:57:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v747: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 7.9 KiB/s wr, 29 op/s
Jan 22 04:57:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:03 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900057c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:04.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:04.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:04 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a80058b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:05 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:05 np0005591760 nova_compute[248045]: 2026-01-22 09:57:05.361 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v748: 337 pgs: 337 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 10 KiB/s wr, 57 op/s
Jan 22 04:57:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:05 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:06.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:06.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:06 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:06 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:07 np0005591760 podman[258403]: 2026-01-22 09:57:07.041572364 +0000 UTC m=+0.036212486 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 04:57:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:07.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:07.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:07.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:07.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:07 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v749: 337 pgs: 337 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 8.0 KiB/s wr, 56 op/s
Jan 22 04:57:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:57:07] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:57:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:57:07] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:57:07 np0005591760 nova_compute[248045]: 2026-01-22 09:57:07.819 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:07 np0005591760 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 04:57:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:07 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a80058b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:57:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:08.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:57:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:57:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:08.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:08 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:08.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:08.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:08.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:08.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:09 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v750: 337 pgs: 337 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 8.0 KiB/s wr, 56 op/s
Jan 22 04:57:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:09 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:10.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:10 np0005591760 nova_compute[248045]: 2026-01-22 09:57:10.363 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:57:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:10.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:57:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:10 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0a80058b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:11 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v751: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 8.0 KiB/s wr, 56 op/s
Jan 22 04:57:11 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:11 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:12 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:57:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:12.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:12 np0005591760 podman[258426]: 2026-01-22 09:57:12.063518544 +0000 UTC m=+0.057459031 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 22 04:57:12 np0005591760 nova_compute[248045]: 2026-01-22 09:57:12.309 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:57:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:57:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:12.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:57:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:12 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:12 np0005591760 nova_compute[248045]: 2026-01-22 09:57:12.821 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:13 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:13 np0005591760 nova_compute[248045]: 2026-01-22 09:57:13.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:57:13 np0005591760 nova_compute[248045]: 2026-01-22 09:57:13.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:57:13 np0005591760 nova_compute[248045]: 2026-01-22 09:57:13.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 04:57:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:57:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v752: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 22 04:57:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:13 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:14.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:14 np0005591760 nova_compute[248045]: 2026-01-22 09:57:14.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:57:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:14.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:14 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b00027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:15 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.321 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.322 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.322 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.322 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.322 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.365 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v753: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:57:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v754: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 814 B/s rd, 0 op/s
Jan 22 04:57:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v755: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1017 B/s rd, 0 op/s
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2497461945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.688 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:57:15 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.898 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.898 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4619MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, 
"label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.899 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.899 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.944 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.944 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 04:57:15 np0005591760 nova_compute[248045]: 2026-01-22 09:57:15.956 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:57:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:15 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:16 np0005591760 podman[258662]: 2026-01-22 09:57:16.006743029 +0000 UTC m=+0.029718481 container create fed013fbd342f5c40c806672e5a6deeea4622091fe349f0a01070916de35a4d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sinoussi, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:57:16 np0005591760 systemd[1]: Started libpod-conmon-fed013fbd342f5c40c806672e5a6deeea4622091fe349f0a01070916de35a4d8.scope.
Jan 22 04:57:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:16.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:16 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:57:16 np0005591760 podman[258662]: 2026-01-22 09:57:16.067136993 +0000 UTC m=+0.090112445 container init fed013fbd342f5c40c806672e5a6deeea4622091fe349f0a01070916de35a4d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sinoussi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 04:57:16 np0005591760 podman[258662]: 2026-01-22 09:57:16.072516618 +0000 UTC m=+0.095492070 container start fed013fbd342f5c40c806672e5a6deeea4622091fe349f0a01070916de35a4d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 04:57:16 np0005591760 podman[258662]: 2026-01-22 09:57:16.073762618 +0000 UTC m=+0.096738060 container attach fed013fbd342f5c40c806672e5a6deeea4622091fe349f0a01070916de35a4d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sinoussi, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 22 04:57:16 np0005591760 optimistic_sinoussi[258674]: 167 167
Jan 22 04:57:16 np0005591760 systemd[1]: libpod-fed013fbd342f5c40c806672e5a6deeea4622091fe349f0a01070916de35a4d8.scope: Deactivated successfully.
Jan 22 04:57:16 np0005591760 podman[258662]: 2026-01-22 09:57:15.994725669 +0000 UTC m=+0.017701131 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:57:16 np0005591760 podman[258698]: 2026-01-22 09:57:16.106192921 +0000 UTC m=+0.020169705 container died fed013fbd342f5c40c806672e5a6deeea4622091fe349f0a01070916de35a4d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:57:16 np0005591760 systemd[1]: var-lib-containers-storage-overlay-fd3fa611566e4b80dce7049a6adbfd43b7232ee7238c91aec9430295e5f263a1-merged.mount: Deactivated successfully.
Jan 22 04:57:16 np0005591760 podman[258698]: 2026-01-22 09:57:16.124931988 +0000 UTC m=+0.038908771 container remove fed013fbd342f5c40c806672e5a6deeea4622091fe349f0a01070916de35a4d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 22 04:57:16 np0005591760 systemd[1]: libpod-conmon-fed013fbd342f5c40c806672e5a6deeea4622091fe349f0a01070916de35a4d8.scope: Deactivated successfully.
Jan 22 04:57:16 np0005591760 podman[258716]: 2026-01-22 09:57:16.252210625 +0000 UTC m=+0.030899377 container create d7608c23fcc5cd1a1a9afbf7b131c7c0af1693da126255c948d5b924feadc939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:57:16 np0005591760 systemd[1]: Started libpod-conmon-d7608c23fcc5cd1a1a9afbf7b131c7c0af1693da126255c948d5b924feadc939.scope.
Jan 22 04:57:16 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:57:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82f8cf6cbdb6b8be623b0756b62f5779077b6cc1594ef3f8262159aead3dfdb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82f8cf6cbdb6b8be623b0756b62f5779077b6cc1594ef3f8262159aead3dfdb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82f8cf6cbdb6b8be623b0756b62f5779077b6cc1594ef3f8262159aead3dfdb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82f8cf6cbdb6b8be623b0756b62f5779077b6cc1594ef3f8262159aead3dfdb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82f8cf6cbdb6b8be623b0756b62f5779077b6cc1594ef3f8262159aead3dfdb0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:57:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187229283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:57:16 np0005591760 podman[258716]: 2026-01-22 09:57:16.316095071 +0000 UTC m=+0.094799933 container init d7608c23fcc5cd1a1a9afbf7b131c7c0af1693da126255c948d5b924feadc939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 04:57:16 np0005591760 podman[258716]: 2026-01-22 09:57:16.32108481 +0000 UTC m=+0.099773552 container start d7608c23fcc5cd1a1a9afbf7b131c7c0af1693da126255c948d5b924feadc939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 04:57:16 np0005591760 podman[258716]: 2026-01-22 09:57:16.322119682 +0000 UTC m=+0.100808424 container attach d7608c23fcc5cd1a1a9afbf7b131c7c0af1693da126255c948d5b924feadc939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_goodall, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 04:57:16 np0005591760 nova_compute[248045]: 2026-01-22 09:57:16.331 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:57:16 np0005591760 podman[258716]: 2026-01-22 09:57:16.240196941 +0000 UTC m=+0.018885703 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:57:16 np0005591760 nova_compute[248045]: 2026-01-22 09:57:16.336 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:57:16 np0005591760 nova_compute[248045]: 2026-01-22 09:57:16.348 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:57:16 np0005591760 nova_compute[248045]: 2026-01-22 09:57:16.349 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 04:57:16 np0005591760 nova_compute[248045]: 2026-01-22 09:57:16.349 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:57:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:16.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:16 np0005591760 amazing_goodall[258729]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:57:16 np0005591760 amazing_goodall[258729]: --> All data devices are unavailable
Jan 22 04:57:16 np0005591760 systemd[1]: libpod-d7608c23fcc5cd1a1a9afbf7b131c7c0af1693da126255c948d5b924feadc939.scope: Deactivated successfully.
Jan 22 04:57:16 np0005591760 podman[258746]: 2026-01-22 09:57:16.615102035 +0000 UTC m=+0.017075981 container died d7608c23fcc5cd1a1a9afbf7b131c7c0af1693da126255c948d5b924feadc939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_goodall, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:57:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:16 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:16 np0005591760 systemd[1]: var-lib-containers-storage-overlay-82f8cf6cbdb6b8be623b0756b62f5779077b6cc1594ef3f8262159aead3dfdb0-merged.mount: Deactivated successfully.
Jan 22 04:57:16 np0005591760 podman[258746]: 2026-01-22 09:57:16.639576186 +0000 UTC m=+0.041550112 container remove d7608c23fcc5cd1a1a9afbf7b131c7c0af1693da126255c948d5b924feadc939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_goodall, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:57:16 np0005591760 systemd[1]: libpod-conmon-d7608c23fcc5cd1a1a9afbf7b131c7c0af1693da126255c948d5b924feadc939.scope: Deactivated successfully.
Jan 22 04:57:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:17.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:17 np0005591760 podman[258839]: 2026-01-22 09:57:17.05721893 +0000 UTC m=+0.027904509 container create 4807523dbeb16967d517cdabd5a5eb6226f73d81d9beaa5965891ff1b9a2af31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 22 04:57:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:17.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:17.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:17.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:17 np0005591760 nova_compute[248045]: 2026-01-22 09:57:17.069 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:17 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:57:17.068 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:52:1d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:ec:a7:e9:bb:bd'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:57:17 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:57:17.069 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 04:57:17 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:57:17.070 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:57:17 np0005591760 systemd[1]: Started libpod-conmon-4807523dbeb16967d517cdabd5a5eb6226f73d81d9beaa5965891ff1b9a2af31.scope.
Jan 22 04:57:17 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:57:17 np0005591760 podman[258839]: 2026-01-22 09:57:17.110724375 +0000 UTC m=+0.081409974 container init 4807523dbeb16967d517cdabd5a5eb6226f73d81d9beaa5965891ff1b9a2af31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_shtern, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:57:17 np0005591760 podman[258839]: 2026-01-22 09:57:17.115551788 +0000 UTC m=+0.086237367 container start 4807523dbeb16967d517cdabd5a5eb6226f73d81d9beaa5965891ff1b9a2af31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:57:17 np0005591760 podman[258839]: 2026-01-22 09:57:17.116745649 +0000 UTC m=+0.087431228 container attach 4807523dbeb16967d517cdabd5a5eb6226f73d81d9beaa5965891ff1b9a2af31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_shtern, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:57:17 np0005591760 sweet_shtern[258853]: 167 167
Jan 22 04:57:17 np0005591760 systemd[1]: libpod-4807523dbeb16967d517cdabd5a5eb6226f73d81d9beaa5965891ff1b9a2af31.scope: Deactivated successfully.
Jan 22 04:57:17 np0005591760 podman[258839]: 2026-01-22 09:57:17.119702965 +0000 UTC m=+0.090388544 container died 4807523dbeb16967d517cdabd5a5eb6226f73d81d9beaa5965891ff1b9a2af31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_shtern, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:57:17 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3772f62b314274f7b4bd2186c1fd459c7fa9814996af84632aed9bb002bc2dcf-merged.mount: Deactivated successfully.
Jan 22 04:57:17 np0005591760 podman[258839]: 2026-01-22 09:57:17.140411917 +0000 UTC m=+0.111097496 container remove 4807523dbeb16967d517cdabd5a5eb6226f73d81d9beaa5965891ff1b9a2af31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_shtern, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:57:17 np0005591760 podman[258839]: 2026-01-22 09:57:17.04549871 +0000 UTC m=+0.016184309 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:57:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:17 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b00027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:17 np0005591760 systemd[1]: libpod-conmon-4807523dbeb16967d517cdabd5a5eb6226f73d81d9beaa5965891ff1b9a2af31.scope: Deactivated successfully.
Jan 22 04:57:17 np0005591760 podman[258874]: 2026-01-22 09:57:17.261359396 +0000 UTC m=+0.027741443 container create cf67a2c5256c8b72d8a1f247f3d0efcce1ee0ce1b2ca17d1b8ef305e95b956ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swirles, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 04:57:17 np0005591760 systemd[1]: Started libpod-conmon-cf67a2c5256c8b72d8a1f247f3d0efcce1ee0ce1b2ca17d1b8ef305e95b956ef.scope.
Jan 22 04:57:17 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:57:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2363ea7fac6f1f61ce6ed905d91a48958bacc39c19fda8f7dbf26f9db656eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2363ea7fac6f1f61ce6ed905d91a48958bacc39c19fda8f7dbf26f9db656eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2363ea7fac6f1f61ce6ed905d91a48958bacc39c19fda8f7dbf26f9db656eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:17 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2363ea7fac6f1f61ce6ed905d91a48958bacc39c19fda8f7dbf26f9db656eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:17 np0005591760 podman[258874]: 2026-01-22 09:57:17.316735779 +0000 UTC m=+0.083117826 container init cf67a2c5256c8b72d8a1f247f3d0efcce1ee0ce1b2ca17d1b8ef305e95b956ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 04:57:17 np0005591760 podman[258874]: 2026-01-22 09:57:17.321569564 +0000 UTC m=+0.087951611 container start cf67a2c5256c8b72d8a1f247f3d0efcce1ee0ce1b2ca17d1b8ef305e95b956ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swirles, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 04:57:17 np0005591760 podman[258874]: 2026-01-22 09:57:17.322966537 +0000 UTC m=+0.089348584 container attach cf67a2c5256c8b72d8a1f247f3d0efcce1ee0ce1b2ca17d1b8ef305e95b956ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swirles, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 04:57:17 np0005591760 podman[258874]: 2026-01-22 09:57:17.250664759 +0000 UTC m=+0.017046816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:57:17 np0005591760 nova_compute[248045]: 2026-01-22 09:57:17.350 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:57:17 np0005591760 nova_compute[248045]: 2026-01-22 09:57:17.350 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 04:57:17 np0005591760 nova_compute[248045]: 2026-01-22 09:57:17.351 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 04:57:17 np0005591760 nova_compute[248045]: 2026-01-22 09:57:17.362 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 04:57:17 np0005591760 nova_compute[248045]: 2026-01-22 09:57:17.362 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:57:17 np0005591760 nova_compute[248045]: 2026-01-22 09:57:17.362 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:57:17 np0005591760 nova_compute[248045]: 2026-01-22 09:57:17.363 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]: {
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:    "0": [
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:        {
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            "devices": [
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "/dev/loop3"
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            ],
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            "lv_name": "ceph_lv0",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            "lv_size": "21470642176",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            "name": "ceph_lv0",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            "tags": {
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.cluster_name": "ceph",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.crush_device_class": "",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.encrypted": "0",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.osd_id": "0",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.type": "block",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.vdo": "0",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:                "ceph.with_tpm": "0"
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            },
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            "type": "block",
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:            "vg_name": "ceph_vg0"
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:        }
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]:    ]
Jan 22 04:57:17 np0005591760 wonderful_swirles[258887]: }
Jan 22 04:57:17 np0005591760 systemd[1]: libpod-cf67a2c5256c8b72d8a1f247f3d0efcce1ee0ce1b2ca17d1b8ef305e95b956ef.scope: Deactivated successfully.
Jan 22 04:57:17 np0005591760 podman[258874]: 2026-01-22 09:57:17.556704625 +0000 UTC m=+0.323086672 container died cf67a2c5256c8b72d8a1f247f3d0efcce1ee0ce1b2ca17d1b8ef305e95b956ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 04:57:17 np0005591760 systemd[1]: var-lib-containers-storage-overlay-2d2363ea7fac6f1f61ce6ed905d91a48958bacc39c19fda8f7dbf26f9db656eb-merged.mount: Deactivated successfully.
Jan 22 04:57:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v756: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1017 B/s rd, 0 op/s
Jan 22 04:57:17 np0005591760 podman[258874]: 2026-01-22 09:57:17.579963634 +0000 UTC m=+0.346345681 container remove cf67a2c5256c8b72d8a1f247f3d0efcce1ee0ce1b2ca17d1b8ef305e95b956ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swirles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:57:17 np0005591760 systemd[1]: libpod-conmon-cf67a2c5256c8b72d8a1f247f3d0efcce1ee0ce1b2ca17d1b8ef305e95b956ef.scope: Deactivated successfully.
Jan 22 04:57:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:57:17] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:57:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:57:17] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:57:17 np0005591760 nova_compute[248045]: 2026-01-22 09:57:17.823 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:17 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:18 np0005591760 podman[258986]: 2026-01-22 09:57:18.004805063 +0000 UTC m=+0.028450110 container create 82eb8151e3e2ae1beebe3913c2f5a3a8ce835796ae44c9db9e3255957d95ed04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jepsen, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:57:18 np0005591760 systemd[1]: Started libpod-conmon-82eb8151e3e2ae1beebe3913c2f5a3a8ce835796ae44c9db9e3255957d95ed04.scope.
Jan 22 04:57:18 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:57:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:18.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:18 np0005591760 podman[258986]: 2026-01-22 09:57:18.066433786 +0000 UTC m=+0.090078852 container init 82eb8151e3e2ae1beebe3913c2f5a3a8ce835796ae44c9db9e3255957d95ed04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jepsen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 22 04:57:18 np0005591760 podman[258986]: 2026-01-22 09:57:18.071676772 +0000 UTC m=+0.095321819 container start 82eb8151e3e2ae1beebe3913c2f5a3a8ce835796ae44c9db9e3255957d95ed04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jepsen, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 04:57:18 np0005591760 podman[258986]: 2026-01-22 09:57:18.072884199 +0000 UTC m=+0.096529245 container attach 82eb8151e3e2ae1beebe3913c2f5a3a8ce835796ae44c9db9e3255957d95ed04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jepsen, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:57:18 np0005591760 dazzling_jepsen[258998]: 167 167
Jan 22 04:57:18 np0005591760 systemd[1]: libpod-82eb8151e3e2ae1beebe3913c2f5a3a8ce835796ae44c9db9e3255957d95ed04.scope: Deactivated successfully.
Jan 22 04:57:18 np0005591760 podman[258986]: 2026-01-22 09:57:18.076315519 +0000 UTC m=+0.099960566 container died 82eb8151e3e2ae1beebe3913c2f5a3a8ce835796ae44c9db9e3255957d95ed04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jepsen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Jan 22 04:57:18 np0005591760 systemd[1]: var-lib-containers-storage-overlay-f445eb3a0c9325f802aa11166d5f1ce53700b394fbfb8d62b4296a6fcc62b7d0-merged.mount: Deactivated successfully.
Jan 22 04:57:18 np0005591760 podman[258986]: 2026-01-22 09:57:17.993663543 +0000 UTC m=+0.017308590 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:57:18 np0005591760 podman[258986]: 2026-01-22 09:57:18.09568226 +0000 UTC m=+0.119327306 container remove 82eb8151e3e2ae1beebe3913c2f5a3a8ce835796ae44c9db9e3255957d95ed04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:57:18 np0005591760 systemd[1]: libpod-conmon-82eb8151e3e2ae1beebe3913c2f5a3a8ce835796ae44c9db9e3255957d95ed04.scope: Deactivated successfully.
Jan 22 04:57:18 np0005591760 podman[259020]: 2026-01-22 09:57:18.219940151 +0000 UTC m=+0.029095245 container create 388897e8af4e06feea81b527bbc1b274704a25bee1f09344b71c6225dad39281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:57:18 np0005591760 systemd[1]: Started libpod-conmon-388897e8af4e06feea81b527bbc1b274704a25bee1f09344b71c6225dad39281.scope.
Jan 22 04:57:18 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:57:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/030614378e97d4e2c474d936a60a530e55a94e72279f94b34ad03ce6556527a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/030614378e97d4e2c474d936a60a530e55a94e72279f94b34ad03ce6556527a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/030614378e97d4e2c474d936a60a530e55a94e72279f94b34ad03ce6556527a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/030614378e97d4e2c474d936a60a530e55a94e72279f94b34ad03ce6556527a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:57:18 np0005591760 podman[259020]: 2026-01-22 09:57:18.278850508 +0000 UTC m=+0.088005612 container init 388897e8af4e06feea81b527bbc1b274704a25bee1f09344b71c6225dad39281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:57:18 np0005591760 podman[259020]: 2026-01-22 09:57:18.283757942 +0000 UTC m=+0.092913036 container start 388897e8af4e06feea81b527bbc1b274704a25bee1f09344b71c6225dad39281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_gould, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 22 04:57:18 np0005591760 podman[259020]: 2026-01-22 09:57:18.284924753 +0000 UTC m=+0.094079847 container attach 388897e8af4e06feea81b527bbc1b274704a25bee1f09344b71c6225dad39281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_gould, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:57:18 np0005591760 podman[259020]: 2026-01-22 09:57:18.208014794 +0000 UTC m=+0.017169887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:57:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:57:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:18.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:18 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:18 np0005591760 happy_gould[259033]: {}
Jan 22 04:57:18 np0005591760 lvm[259110]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:57:18 np0005591760 lvm[259110]: VG ceph_vg0 finished
Jan 22 04:57:18 np0005591760 systemd[1]: libpod-388897e8af4e06feea81b527bbc1b274704a25bee1f09344b71c6225dad39281.scope: Deactivated successfully.
Jan 22 04:57:18 np0005591760 podman[259020]: 2026-01-22 09:57:18.764499761 +0000 UTC m=+0.573654865 container died 388897e8af4e06feea81b527bbc1b274704a25bee1f09344b71c6225dad39281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:57:18 np0005591760 systemd[1]: var-lib-containers-storage-overlay-030614378e97d4e2c474d936a60a530e55a94e72279f94b34ad03ce6556527a9-merged.mount: Deactivated successfully.
Jan 22 04:57:18 np0005591760 podman[259020]: 2026-01-22 09:57:18.787954791 +0000 UTC m=+0.597109885 container remove 388897e8af4e06feea81b527bbc1b274704a25bee1f09344b71c6225dad39281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 22 04:57:18 np0005591760 systemd[1]: libpod-conmon-388897e8af4e06feea81b527bbc1b274704a25bee1f09344b71c6225dad39281.scope: Deactivated successfully.
Jan 22 04:57:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:57:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:57:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:57:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:57:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:18.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:18.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:18.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:18.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:57:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/347232462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:57:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:19 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:57:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:57:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:57:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:57:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:57:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:57:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v757: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 635 B/s rd, 0 op/s
Jan 22 04:57:19 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:57:19 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:57:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:19 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b00027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:20.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:20 np0005591760 nova_compute[248045]: 2026-01-22 09:57:20.366 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:20.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:20 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:21 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0880012d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v758: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1017 B/s rd, 0 op/s
Jan 22 04:57:21 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:21 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:22 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:57:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:57:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:22.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:57:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:57:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:22.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:57:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:22 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:22 np0005591760 nova_compute[248045]: 2026-01-22 09:57:22.824 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:23 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b00040b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:57:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v759: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 381 B/s rd, 0 op/s
Jan 22 04:57:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:23 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0ac0026e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:24.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:24.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:24 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:25 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:25 np0005591760 nova_compute[248045]: 2026-01-22 09:57:25.369 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v760: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Jan 22 04:57:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:25 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b00040b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:26.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:57:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:26.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:57:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:26 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0ac010460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:27.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:27.064Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:27.065Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:27.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:27 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v761: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 04:57:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:57:27] "GET /metrics HTTP/1.1" 200 48587 "" "Prometheus/2.51.0"
Jan 22 04:57:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:57:27] "GET /metrics HTTP/1.1" 200 48587 "" "Prometheus/2.51.0"
Jan 22 04:57:27 np0005591760 nova_compute[248045]: 2026-01-22 09:57:27.826 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:27 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:57:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:28.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:57:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:57:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:57:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:28.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:57:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:28 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0004dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:28.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:28.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:28.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:28.896Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:29 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0ac010460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v762: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 04:57:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:29 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:30.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:30 np0005591760 nova_compute[248045]: 2026-01-22 09:57:30.370 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:30.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:30 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:31 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0004dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v763: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 22 04:57:31 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:31 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0ac010460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:32 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:57:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:32.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:32.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:32 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:32 np0005591760 nova_compute[248045]: 2026-01-22 09:57:32.827 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:33 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:57:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v764: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 22 04:57:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:33 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0005ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:34.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:34.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:34 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0ac010460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:35 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:35 np0005591760 nova_compute[248045]: 2026-01-22 09:57:35.371 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v765: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 22 04:57:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:35 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:36.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:36 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0005ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:37.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:37.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:37.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:37.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:37 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0ac01c420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v766: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 22 04:57:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:57:37] "GET /metrics HTTP/1.1" 200 48609 "" "Prometheus/2.51.0"
Jan 22 04:57:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:57:37] "GET /metrics HTTP/1.1" 200 48609 "" "Prometheus/2.51.0"
Jan 22 04:57:37 np0005591760 nova_compute[248045]: 2026-01-22 09:57:37.829 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:37 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:38 np0005591760 podman[259192]: 2026-01-22 09:57:38.047334405 +0000 UTC m=+0.036474170 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:57:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:57:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:38.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:57:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:57:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:38.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:38 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090005d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:38.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:38.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:38.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:38.896Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:39 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0005ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v767: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 22 04:57:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:39 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0005ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:40.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:40 np0005591760 nova_compute[248045]: 2026-01-22 09:57:40.372 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:40.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:40 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:41 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v768: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 22 04:57:41 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:41 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:42 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:57:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:42.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:42.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:42 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:42 np0005591760 nova_compute[248045]: 2026-01-22 09:57:42.830 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:43 np0005591760 podman[259215]: 2026-01-22 09:57:43.064091198 +0000 UTC m=+0.057261298 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 04:57:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:43 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b80026e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:57:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v769: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 04:57:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:43 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0005ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:44.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:44.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:44 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09000d910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:45 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09000d910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:45 np0005591760 nova_compute[248045]: 2026-01-22 09:57:45.373 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v770: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 22 04:57:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:45 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b80026e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:46.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:46.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:46 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:46 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0005ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:47.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:47.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:47.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:47.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:47 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:57:47.316 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:57:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:57:47.317 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:57:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:57:47.317 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:57:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v771: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 04:57:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:57:47] "GET /metrics HTTP/1.1" 200 48609 "" "Prometheus/2.51.0"
Jan 22 04:57:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:57:47] "GET /metrics HTTP/1.1" 200 48609 "" "Prometheus/2.51.0"
Jan 22 04:57:47 np0005591760 nova_compute[248045]: 2026-01-22 09:57:47.831 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:47 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:48.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:57:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:48.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:48 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:48.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:48.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:48.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:48.896Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:49 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b4073420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:57:49
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'volumes', 'backups', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.nfs']
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:57:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v772: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 04:57:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:49 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0005ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:50.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:50 np0005591760 nova_compute[248045]: 2026-01-22 09:57:50.374 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:50.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:50 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:51 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09000d910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v773: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 22 04:57:51 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:51 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b4073420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:52 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:57:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:52.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:52.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:52 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0005ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:52 np0005591760 nova_compute[248045]: 2026-01-22 09:57:52.833 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:53 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:57:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v774: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 22 04:57:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:53 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09000d910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:54.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:54.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:54 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b4073420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:55 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0005ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:55 np0005591760 nova_compute[248045]: 2026-01-22 09:57:55.375 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v775: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 22 04:57:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:55 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:56.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:56.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:56 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:56 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:57.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:57.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:57.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:57.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:57 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b4073420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v776: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 22 04:57:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:57:57] "GET /metrics HTTP/1.1" 200 48611 "" "Prometheus/2.51.0"
Jan 22 04:57:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:57:57] "GET /metrics HTTP/1.1" 200 48611 "" "Prometheus/2.51.0"
Jan 22 04:57:57 np0005591760 nova_compute[248045]: 2026-01-22 09:57:57.834 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:57:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:57 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0005ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:57:58.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:57:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:57:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:57:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:57:58.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:57:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:58 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:58.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:58.897Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:58.897Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:57:58.897Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:57:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:59 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c003660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v777: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:57:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:58:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:57:59 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b4073420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:58:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:00.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:00 np0005591760 nova_compute[248045]: 2026-01-22 09:58:00.378 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:58:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:58:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:00.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:58:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:58:00 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b4073420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:58:01 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:58:01 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b0005ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:58:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v778: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 22 04:58:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:58:02 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b80026e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:58:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:58:02 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:58:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:02.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:02.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:58:02 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb09000d910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 22 04:58:02 np0005591760 nova_compute[248045]: 2026-01-22 09:58:02.837 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:58:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[254321]: 22/01/2026 09:58:03 : epoch 6971f3a0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0b4073420 fd 48 proxy ignored for local
Jan 22 04:58:03 np0005591760 kernel: ganesha.nfsd[259211]: segfault at 50 ip 00007fb10901832e sp 00007fb071ffa210 error 4 in libntirpc.so.5.8[7fb108ffd000+2c000] likely on CPU 0 (core 0, socket 0)
Jan 22 04:58:03 np0005591760 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 22 04:58:03 np0005591760 systemd[1]: Started Process Core Dump (PID 259283/UID 0).
Jan 22 04:58:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:58:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v779: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 04:58:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:58:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:04.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:58:04 np0005591760 systemd-coredump[259284]: Process 254325 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 69:
                                                       #0  0x00007fb10901832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       #1  0x0000000000000000 n/a (n/a + 0x0)
                                                       #2  0x00007fb109022900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)
                                                       ELF object binary architecture: AMD x86-64
Jan 22 04:58:04 np0005591760 systemd[1]: systemd-coredump@4-259283-0.service: Deactivated successfully.
Jan 22 04:58:04 np0005591760 systemd[1]: systemd-coredump@4-259283-0.service: Consumed 1.009s CPU time.
Jan 22 04:58:04 np0005591760 podman[259290]: 2026-01-22 09:58:04.355680106 +0000 UTC m=+0.017505031 container died b976a0d59c790c2e84b0e0dc159b49f2973a1ca3667d6bf61038f99b8bf78f9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:58:04 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3e0026ed777c6bd7a86b617509426b6e5727c15925a14b4ba7e242bbe790b32c-merged.mount: Deactivated successfully.
Jan 22 04:58:04 np0005591760 podman[259290]: 2026-01-22 09:58:04.378626015 +0000 UTC m=+0.040450919 container remove b976a0d59c790c2e84b0e0dc159b49f2973a1ca3667d6bf61038f99b8bf78f9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 04:58:04 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Main process exited, code=exited, status=139/n/a
Jan 22 04:58:04 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Failed with result 'exit-code'.
Jan 22 04:58:04 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Consumed 1.041s CPU time.
Jan 22 04:58:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:04.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:05 np0005591760 nova_compute[248045]: 2026-01-22 09:58:05.380 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:58:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v780: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 04:58:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:06.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:06 np0005591760 nova_compute[248045]: 2026-01-22 09:58:06.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:58:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:06.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:07.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:07.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:07.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:07.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v781: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 22 04:58:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:58:07] "GET /metrics HTTP/1.1" 200 48590 "" "Prometheus/2.51.0"
Jan 22 04:58:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:58:07] "GET /metrics HTTP/1.1" 200 48590 "" "Prometheus/2.51.0"
Jan 22 04:58:07 np0005591760 nova_compute[248045]: 2026-01-22 09:58:07.838 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:58:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:08.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:58:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:58:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:08.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:58:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:08.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:08.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:08.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:08.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:09 np0005591760 podman[259329]: 2026-01-22 09:58:09.048465816 +0000 UTC m=+0.040412252 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 04:58:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [WARNING] 021/095809 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 22 04:58:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [NOTICE] 021/095809 (4) : haproxy version is 2.3.17-d1c9119
Jan 22 04:58:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [NOTICE] 021/095809 (4) : path to executable is /usr/local/sbin/haproxy
Jan 22 04:58:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-nfs-cephfs-compute-0-dnpemq[102679]: [ALERT] 021/095809 (4) : backend 'backend' has no server available!
Jan 22 04:58:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v782: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 22 04:58:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:10.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:10 np0005591760 nova_compute[248045]: 2026-01-22 09:58:10.311 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:58:10 np0005591760 nova_compute[248045]: 2026-01-22 09:58:10.312 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 04:58:10 np0005591760 nova_compute[248045]: 2026-01-22 09:58:10.382 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:58:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:10.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v783: 337 pgs: 337 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 04:58:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:12.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:12 np0005591760 nova_compute[248045]: 2026-01-22 09:58:12.306 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:58:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 22 04:58:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2678663726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 04:58:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:12.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:12 np0005591760 nova_compute[248045]: 2026-01-22 09:58:12.839 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 04:58:13 np0005591760 nova_compute[248045]: 2026-01-22 09:58:13.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:58:13 np0005591760 nova_compute[248045]: 2026-01-22 09:58:13.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 04:58:13 np0005591760 nova_compute[248045]: 2026-01-22 09:58:13.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 04:58:13 np0005591760 nova_compute[248045]: 2026-01-22 09:58:13.309 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.465140) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075893465159, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1581, "num_deletes": 255, "total_data_size": 2848306, "memory_usage": 2897752, "flush_reason": "Manual Compaction"}
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075893471990, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2786906, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22195, "largest_seqno": 23775, "table_properties": {"data_size": 2779808, "index_size": 4041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14985, "raw_average_key_size": 19, "raw_value_size": 2765369, "raw_average_value_size": 3582, "num_data_blocks": 179, "num_entries": 772, "num_filter_entries": 772, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769075745, "oldest_key_time": 1769075745, "file_creation_time": 1769075893, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 6873 microseconds, and 4257 cpu microseconds.
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.472014) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2786906 bytes OK
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.472024) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.472353) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.472362) EVENT_LOG_v1 {"time_micros": 1769075893472359, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.472369) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2841592, prev total WAL file size 2841592, number of live WAL files 2.
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.473102) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353031' seq:0, type:0; will stop at (end)
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2721KB)], [47(13MB)]
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075893473149, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 17243864, "oldest_snapshot_seqno": -1}
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5794 keys, 17049825 bytes, temperature: kUnknown
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075893509906, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 17049825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17007268, "index_size": 26959, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 146266, "raw_average_key_size": 25, "raw_value_size": 16898712, "raw_average_value_size": 2916, "num_data_blocks": 1112, "num_entries": 5794, "num_filter_entries": 5794, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769075893, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.510049) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 17049825 bytes
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.511109) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 468.7 rd, 463.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 13.8 +0.0 blob) out(16.3 +0.0 blob), read-write-amplify(12.3) write-amplify(6.1) OK, records in: 6318, records dropped: 524 output_compression: NoCompression
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.511121) EVENT_LOG_v1 {"time_micros": 1769075893511116, "job": 24, "event": "compaction_finished", "compaction_time_micros": 36789, "compaction_time_cpu_micros": 24690, "output_level": 6, "num_output_files": 1, "total_output_size": 17049825, "num_input_records": 6318, "num_output_records": 5794, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075893511450, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075893513075, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.473026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.513128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.513130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.513132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.513132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:58:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:13.513133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:58:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v784: 337 pgs: 337 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 04:58:14 np0005591760 podman[259376]: 2026-01-22 09:58:14.071100493 +0000 UTC m=+0.057195232 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 04:58:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:14.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:14 np0005591760 nova_compute[248045]: 2026-01-22 09:58:14.304 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:58:14 np0005591760 nova_compute[248045]: 2026-01-22 09:58:14.315 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:58:14 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Scheduled restart job, restart counter is at 5.
Jan 22 04:58:14 np0005591760 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:58:14 np0005591760 systemd[1]: ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2@nfs.cephfs.2.0.compute-0.ylzmiu.service: Consumed 1.041s CPU time.
Jan 22 04:58:14 np0005591760 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2...
Jan 22 04:58:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:14.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:14 np0005591760 podman[259437]: 2026-01-22 09:58:14.650947708 +0000 UTC m=+0.030930100 container create 807fd1c66fa47b1f282c2487140a988a20428cb5ae8b3725296c3c9838386c77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:58:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abfb53e3f5d559bdd9f3c6916be3df970f0c23739afcead6216472ac31bd5881/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abfb53e3f5d559bdd9f3c6916be3df970f0c23739afcead6216472ac31bd5881/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abfb53e3f5d559bdd9f3c6916be3df970f0c23739afcead6216472ac31bd5881/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abfb53e3f5d559bdd9f3c6916be3df970f0c23739afcead6216472ac31bd5881/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ylzmiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:14 np0005591760 podman[259437]: 2026-01-22 09:58:14.688540902 +0000 UTC m=+0.068523284 container init 807fd1c66fa47b1f282c2487140a988a20428cb5ae8b3725296c3c9838386c77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 04:58:14 np0005591760 podman[259437]: 2026-01-22 09:58:14.695651081 +0000 UTC m=+0.075633462 container start 807fd1c66fa47b1f282c2487140a988a20428cb5ae8b3725296c3c9838386c77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:58:14 np0005591760 bash[259437]: 807fd1c66fa47b1f282c2487140a988a20428cb5ae8b3725296c3c9838386c77
Jan 22 04:58:14 np0005591760 podman[259437]: 2026-01-22 09:58:14.63675343 +0000 UTC m=+0.016735831 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:58:14 np0005591760 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ylzmiu for 43df7a30-cf5f-5209-adfd-bf44298b19f2.
Jan 22 04:58:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 22 04:58:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 22 04:58:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 22 04:58:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 22 04:58:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 22 04:58:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 22 04:58:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 22 04:58:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:58:15 np0005591760 nova_compute[248045]: 2026-01-22 09:58:15.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:58:15 np0005591760 nova_compute[248045]: 2026-01-22 09:58:15.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 04:58:15 np0005591760 nova_compute[248045]: 2026-01-22 09:58:15.384 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v785: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Jan 22 04:58:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:58:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:16.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:58:16 np0005591760 nova_compute[248045]: 2026-01-22 09:58:16.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:58:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:16.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:17.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:17.070Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:17.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:17.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.317 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.317 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.317 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.317 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.317 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:58:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v786: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Jan 22 04:58:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:58:17] "GET /metrics HTTP/1.1" 200 48590 "" "Prometheus/2.51.0"
Jan 22 04:58:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:58:17] "GET /metrics HTTP/1.1" 200 48590 "" "Prometheus/2.51.0"
Jan 22 04:58:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:58:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2401871835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.656 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.830 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.831 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4486MB free_disk=59.96738052368164GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.831 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.831 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.841 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.926 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.927 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.963 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing inventories for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.975 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating ProviderTree inventory for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.975 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating inventory in ProviderTree for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.984 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing aggregate associations for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 04:58:17 np0005591760 nova_compute[248045]: 2026-01-22 09:58:17.997 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing trait associations for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f, traits: HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,HW_CPU_X86_AVX512VAES,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,HW_CPU_X86_SSE41,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 04:58:18 np0005591760 nova_compute[248045]: 2026-01-22 09:58:18.010 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:58:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:18.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:18 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:58:18.248 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:52:1d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:ec:a7:e9:bb:bd'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:58:18 np0005591760 nova_compute[248045]: 2026-01-22 09:58:18.248 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:18 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:58:18.249 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 04:58:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:58:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1625916810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:58:18 np0005591760 nova_compute[248045]: 2026-01-22 09:58:18.353 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:58:18 np0005591760 nova_compute[248045]: 2026-01-22 09:58:18.356 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:58:18 np0005591760 nova_compute[248045]: 2026-01-22 09:58:18.367 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:58:18 np0005591760 nova_compute[248045]: 2026-01-22 09:58:18.368 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 04:58:18 np0005591760 nova_compute[248045]: 2026-01-22 09:58:18.369 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:58:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:58:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:18.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:18.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:18.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:18.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:18.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:58:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:58:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:58:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:58:19 np0005591760 nova_compute[248045]: 2026-01-22 09:58:19.368 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:58:19 np0005591760 nova_compute[248045]: 2026-01-22 09:58:19.368 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 04:58:19 np0005591760 nova_compute[248045]: 2026-01-22 09:58:19.368 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 04:58:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:58:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:58:19 np0005591760 nova_compute[248045]: 2026-01-22 09:58:19.378 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 04:58:19 np0005591760 nova_compute[248045]: 2026-01-22 09:58:19.378 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:58:19 np0005591760 nova_compute[248045]: 2026-01-22 09:58:19.378 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:58:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v787: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Jan 22 04:58:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:20.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:20 np0005591760 nova_compute[248045]: 2026-01-22 09:58:20.387 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:58:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:20.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:58:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 04:58:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 04:58:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:58:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:58:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:58:20 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:20 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 04:58:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 04:58:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:58:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v788: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 22 04:58:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:58:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:21 np0005591760 podman[259703]: 2026-01-22 09:58:21.868380685 +0000 UTC m=+0.036608538 container create 5cafb0250259c7f6024cb670aa4dcd62d87d7033ccf021379f3f6f32895d73b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_roentgen, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:58:21 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:21 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:21 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:58:21 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:21 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:21 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:58:21 np0005591760 systemd[1]: Started libpod-conmon-5cafb0250259c7f6024cb670aa4dcd62d87d7033ccf021379f3f6f32895d73b6.scope.
Jan 22 04:58:21 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:58:21 np0005591760 podman[259703]: 2026-01-22 09:58:21.942269316 +0000 UTC m=+0.110497190 container init 5cafb0250259c7f6024cb670aa4dcd62d87d7033ccf021379f3f6f32895d73b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:58:21 np0005591760 podman[259703]: 2026-01-22 09:58:21.948498914 +0000 UTC m=+0.116726768 container start 5cafb0250259c7f6024cb670aa4dcd62d87d7033ccf021379f3f6f32895d73b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Jan 22 04:58:21 np0005591760 podman[259703]: 2026-01-22 09:58:21.853015779 +0000 UTC m=+0.021243653 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:58:21 np0005591760 podman[259703]: 2026-01-22 09:58:21.95070294 +0000 UTC m=+0.118930795 container attach 5cafb0250259c7f6024cb670aa4dcd62d87d7033ccf021379f3f6f32895d73b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 22 04:58:21 np0005591760 competent_roentgen[259716]: 167 167
Jan 22 04:58:21 np0005591760 systemd[1]: libpod-5cafb0250259c7f6024cb670aa4dcd62d87d7033ccf021379f3f6f32895d73b6.scope: Deactivated successfully.
Jan 22 04:58:21 np0005591760 podman[259703]: 2026-01-22 09:58:21.953614341 +0000 UTC m=+0.121842194 container died 5cafb0250259c7f6024cb670aa4dcd62d87d7033ccf021379f3f6f32895d73b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 04:58:21 np0005591760 systemd[1]: var-lib-containers-storage-overlay-424f58adceee2e2207996874203c4c114cebc48fab4f490c0b4ab7efa970fad7-merged.mount: Deactivated successfully.
Jan 22 04:58:21 np0005591760 podman[259703]: 2026-01-22 09:58:21.975125497 +0000 UTC m=+0.143353352 container remove 5cafb0250259c7f6024cb670aa4dcd62d87d7033ccf021379f3f6f32895d73b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 04:58:21 np0005591760 systemd[1]: libpod-conmon-5cafb0250259c7f6024cb670aa4dcd62d87d7033ccf021379f3f6f32895d73b6.scope: Deactivated successfully.
Jan 22 04:58:22 np0005591760 podman[259739]: 2026-01-22 09:58:22.118476764 +0000 UTC m=+0.038907232 container create db7c7939dcdfcee90c33c5ebad4ddf23725513ad00ed9d7e5bfaecc75611023b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_dhawan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:58:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:22.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:22 np0005591760 systemd[1]: Started libpod-conmon-db7c7939dcdfcee90c33c5ebad4ddf23725513ad00ed9d7e5bfaecc75611023b.scope.
Jan 22 04:58:22 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:58:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b364fd04371c87de90c53604d69f88d0dd20494503ba3ed373b2d26a0c22292/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b364fd04371c87de90c53604d69f88d0dd20494503ba3ed373b2d26a0c22292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b364fd04371c87de90c53604d69f88d0dd20494503ba3ed373b2d26a0c22292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b364fd04371c87de90c53604d69f88d0dd20494503ba3ed373b2d26a0c22292/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b364fd04371c87de90c53604d69f88d0dd20494503ba3ed373b2d26a0c22292/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:22 np0005591760 podman[259739]: 2026-01-22 09:58:22.197274703 +0000 UTC m=+0.117705192 container init db7c7939dcdfcee90c33c5ebad4ddf23725513ad00ed9d7e5bfaecc75611023b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 04:58:22 np0005591760 podman[259739]: 2026-01-22 09:58:22.100433708 +0000 UTC m=+0.020864177 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:58:22 np0005591760 podman[259739]: 2026-01-22 09:58:22.203834615 +0000 UTC m=+0.124265083 container start db7c7939dcdfcee90c33c5ebad4ddf23725513ad00ed9d7e5bfaecc75611023b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 04:58:22 np0005591760 podman[259739]: 2026-01-22 09:58:22.205267807 +0000 UTC m=+0.125698296 container attach db7c7939dcdfcee90c33c5ebad4ddf23725513ad00ed9d7e5bfaecc75611023b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 22 04:58:22 np0005591760 nice_dhawan[259752]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:58:22 np0005591760 nice_dhawan[259752]: --> All data devices are unavailable
Jan 22 04:58:22 np0005591760 systemd[1]: libpod-db7c7939dcdfcee90c33c5ebad4ddf23725513ad00ed9d7e5bfaecc75611023b.scope: Deactivated successfully.
Jan 22 04:58:22 np0005591760 podman[259739]: 2026-01-22 09:58:22.513240162 +0000 UTC m=+0.433670631 container died db7c7939dcdfcee90c33c5ebad4ddf23725513ad00ed9d7e5bfaecc75611023b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_dhawan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 04:58:22 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3b364fd04371c87de90c53604d69f88d0dd20494503ba3ed373b2d26a0c22292-merged.mount: Deactivated successfully.
Jan 22 04:58:22 np0005591760 podman[259739]: 2026-01-22 09:58:22.539317318 +0000 UTC m=+0.459747787 container remove db7c7939dcdfcee90c33c5ebad4ddf23725513ad00ed9d7e5bfaecc75611023b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 22 04:58:22 np0005591760 systemd[1]: libpod-conmon-db7c7939dcdfcee90c33c5ebad4ddf23725513ad00ed9d7e5bfaecc75611023b.scope: Deactivated successfully.
Jan 22 04:58:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:22.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:22 np0005591760 nova_compute[248045]: 2026-01-22 09:58:22.844 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:23 np0005591760 podman[259860]: 2026-01-22 09:58:23.046960899 +0000 UTC m=+0.029992221 container create 846dc6965976b2d410853c0e95d01022cf2d7d1787edc83eca6187f655e13ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_zhukovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:58:23 np0005591760 systemd[1]: Started libpod-conmon-846dc6965976b2d410853c0e95d01022cf2d7d1787edc83eca6187f655e13ba2.scope.
Jan 22 04:58:23 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:58:23 np0005591760 podman[259860]: 2026-01-22 09:58:23.10851461 +0000 UTC m=+0.091545931 container init 846dc6965976b2d410853c0e95d01022cf2d7d1787edc83eca6187f655e13ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:58:23 np0005591760 podman[259860]: 2026-01-22 09:58:23.113683406 +0000 UTC m=+0.096714718 container start 846dc6965976b2d410853c0e95d01022cf2d7d1787edc83eca6187f655e13ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_zhukovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 04:58:23 np0005591760 podman[259860]: 2026-01-22 09:58:23.114956918 +0000 UTC m=+0.097988230 container attach 846dc6965976b2d410853c0e95d01022cf2d7d1787edc83eca6187f655e13ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 04:58:23 np0005591760 hardcore_zhukovsky[259873]: 167 167
Jan 22 04:58:23 np0005591760 podman[259860]: 2026-01-22 09:58:23.119271004 +0000 UTC m=+0.102302326 container died 846dc6965976b2d410853c0e95d01022cf2d7d1787edc83eca6187f655e13ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_zhukovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:58:23 np0005591760 systemd[1]: libpod-846dc6965976b2d410853c0e95d01022cf2d7d1787edc83eca6187f655e13ba2.scope: Deactivated successfully.
Jan 22 04:58:23 np0005591760 podman[259860]: 2026-01-22 09:58:23.03463769 +0000 UTC m=+0.017669032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:58:23 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0457b70ec2793125f6185da7bfc8020af66595392d4a7040139bb848af7bb0da-merged.mount: Deactivated successfully.
Jan 22 04:58:23 np0005591760 podman[259860]: 2026-01-22 09:58:23.16683367 +0000 UTC m=+0.149864991 container remove 846dc6965976b2d410853c0e95d01022cf2d7d1787edc83eca6187f655e13ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_zhukovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:58:23 np0005591760 systemd[1]: libpod-conmon-846dc6965976b2d410853c0e95d01022cf2d7d1787edc83eca6187f655e13ba2.scope: Deactivated successfully.
Jan 22 04:58:23 np0005591760 podman[259895]: 2026-01-22 09:58:23.329711399 +0000 UTC m=+0.040889863 container create 5d90f6b4612b502852dec6194a6e32e98e95743eaa4e2d18eb3f5afdde7eeff8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_goodall, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:58:23 np0005591760 systemd[1]: Started libpod-conmon-5d90f6b4612b502852dec6194a6e32e98e95743eaa4e2d18eb3f5afdde7eeff8.scope.
Jan 22 04:58:23 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:58:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d66e66a4e9bd8820d53f0e6ffa6b6f31b8e03e244a684b3157d075851030d13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d66e66a4e9bd8820d53f0e6ffa6b6f31b8e03e244a684b3157d075851030d13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d66e66a4e9bd8820d53f0e6ffa6b6f31b8e03e244a684b3157d075851030d13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d66e66a4e9bd8820d53f0e6ffa6b6f31b8e03e244a684b3157d075851030d13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:23 np0005591760 podman[259895]: 2026-01-22 09:58:23.397223695 +0000 UTC m=+0.108402159 container init 5d90f6b4612b502852dec6194a6e32e98e95743eaa4e2d18eb3f5afdde7eeff8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_goodall, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:58:23 np0005591760 podman[259895]: 2026-01-22 09:58:23.404349644 +0000 UTC m=+0.115528098 container start 5d90f6b4612b502852dec6194a6e32e98e95743eaa4e2d18eb3f5afdde7eeff8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_goodall, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 04:58:23 np0005591760 podman[259895]: 2026-01-22 09:58:23.405552281 +0000 UTC m=+0.116730736 container attach 5d90f6b4612b502852dec6194a6e32e98e95743eaa4e2d18eb3f5afdde7eeff8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 04:58:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v789: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Jan 22 04:58:23 np0005591760 podman[259895]: 2026-01-22 09:58:23.314653522 +0000 UTC m=+0.025831996 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:58:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]: {
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:    "0": [
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:        {
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            "devices": [
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "/dev/loop3"
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            ],
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            "lv_name": "ceph_lv0",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            "lv_size": "21470642176",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            "name": "ceph_lv0",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            "tags": {
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.cluster_name": "ceph",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.crush_device_class": "",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.encrypted": "0",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.osd_id": "0",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.type": "block",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.vdo": "0",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:                "ceph.with_tpm": "0"
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            },
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            "type": "block",
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:            "vg_name": "ceph_vg0"
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:        }
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]:    ]
Jan 22 04:58:23 np0005591760 heuristic_goodall[259908]: }
Jan 22 04:58:23 np0005591760 systemd[1]: libpod-5d90f6b4612b502852dec6194a6e32e98e95743eaa4e2d18eb3f5afdde7eeff8.scope: Deactivated successfully.
Jan 22 04:58:23 np0005591760 podman[259895]: 2026-01-22 09:58:23.680201846 +0000 UTC m=+0.391380310 container died 5d90f6b4612b502852dec6194a6e32e98e95743eaa4e2d18eb3f5afdde7eeff8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 04:58:23 np0005591760 systemd[1]: var-lib-containers-storage-overlay-5d66e66a4e9bd8820d53f0e6ffa6b6f31b8e03e244a684b3157d075851030d13-merged.mount: Deactivated successfully.
Jan 22 04:58:23 np0005591760 podman[259895]: 2026-01-22 09:58:23.705642352 +0000 UTC m=+0.416820816 container remove 5d90f6b4612b502852dec6194a6e32e98e95743eaa4e2d18eb3f5afdde7eeff8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:58:23 np0005591760 systemd[1]: libpod-conmon-5d90f6b4612b502852dec6194a6e32e98e95743eaa4e2d18eb3f5afdde7eeff8.scope: Deactivated successfully.
Jan 22 04:58:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:24 np0005591760 podman[260008]: 2026-01-22 09:58:24.221107302 +0000 UTC m=+0.069988257 container create 748eda2c2094d7eac91a95f00f64b8ec9c593be20fa11c0c966c64039ae079a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 04:58:24 np0005591760 systemd[1]: Started libpod-conmon-748eda2c2094d7eac91a95f00f64b8ec9c593be20fa11c0c966c64039ae079a5.scope.
Jan 22 04:58:24 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:58:24.251 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:58:24 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:58:24 np0005591760 podman[260008]: 2026-01-22 09:58:24.281354848 +0000 UTC m=+0.130235813 container init 748eda2c2094d7eac91a95f00f64b8ec9c593be20fa11c0c966c64039ae079a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_bhabha, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:58:24 np0005591760 podman[260008]: 2026-01-22 09:58:24.286860391 +0000 UTC m=+0.135741346 container start 748eda2c2094d7eac91a95f00f64b8ec9c593be20fa11c0c966c64039ae079a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:58:24 np0005591760 jovial_bhabha[260021]: 167 167
Jan 22 04:58:24 np0005591760 podman[260008]: 2026-01-22 09:58:24.290638325 +0000 UTC m=+0.139519301 container attach 748eda2c2094d7eac91a95f00f64b8ec9c593be20fa11c0c966c64039ae079a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 04:58:24 np0005591760 systemd[1]: libpod-748eda2c2094d7eac91a95f00f64b8ec9c593be20fa11c0c966c64039ae079a5.scope: Deactivated successfully.
Jan 22 04:58:24 np0005591760 podman[260008]: 2026-01-22 09:58:24.200551568 +0000 UTC m=+0.049432533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:58:24 np0005591760 podman[260008]: 2026-01-22 09:58:24.292290452 +0000 UTC m=+0.141171407 container died 748eda2c2094d7eac91a95f00f64b8ec9c593be20fa11c0c966c64039ae079a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 04:58:24 np0005591760 systemd[1]: var-lib-containers-storage-overlay-daed17e3b4e4cda426ffc88920e01fae54bbfc39396baee419335ba3ad2c0071-merged.mount: Deactivated successfully.
Jan 22 04:58:24 np0005591760 podman[260008]: 2026-01-22 09:58:24.314365141 +0000 UTC m=+0.163246096 container remove 748eda2c2094d7eac91a95f00f64b8ec9c593be20fa11c0c966c64039ae079a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:58:24 np0005591760 systemd[1]: libpod-conmon-748eda2c2094d7eac91a95f00f64b8ec9c593be20fa11c0c966c64039ae079a5.scope: Deactivated successfully.
Jan 22 04:58:24 np0005591760 podman[260043]: 2026-01-22 09:58:24.452326162 +0000 UTC m=+0.030941020 container create d5713bef927c4723b11c206f0090540cac5a666e54b262c5bd3c55a25e0aa245 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_jackson, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 04:58:24 np0005591760 systemd[1]: Started libpod-conmon-d5713bef927c4723b11c206f0090540cac5a666e54b262c5bd3c55a25e0aa245.scope.
Jan 22 04:58:24 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:58:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba37564d32ba2b58c795aa8f2275cfa16d1887f61b3bbbaaa4c09ed2a397ac10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba37564d32ba2b58c795aa8f2275cfa16d1887f61b3bbbaaa4c09ed2a397ac10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba37564d32ba2b58c795aa8f2275cfa16d1887f61b3bbbaaa4c09ed2a397ac10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:24 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba37564d32ba2b58c795aa8f2275cfa16d1887f61b3bbbaaa4c09ed2a397ac10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:58:24 np0005591760 podman[260043]: 2026-01-22 09:58:24.531328347 +0000 UTC m=+0.109943226 container init d5713bef927c4723b11c206f0090540cac5a666e54b262c5bd3c55a25e0aa245 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_jackson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:58:24 np0005591760 podman[260043]: 2026-01-22 09:58:24.440540837 +0000 UTC m=+0.019155715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:58:24 np0005591760 podman[260043]: 2026-01-22 09:58:24.537762039 +0000 UTC m=+0.116376898 container start d5713bef927c4723b11c206f0090540cac5a666e54b262c5bd3c55a25e0aa245 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_jackson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 04:58:24 np0005591760 podman[260043]: 2026-01-22 09:58:24.53927332 +0000 UTC m=+0.117888178 container attach d5713bef927c4723b11c206f0090540cac5a666e54b262c5bd3c55a25e0aa245 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 22 04:58:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:24.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:58:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:58:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:58:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:58:25 np0005591760 sleepy_jackson[260057]: {}
Jan 22 04:58:25 np0005591760 lvm[260135]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:58:25 np0005591760 lvm[260135]: VG ceph_vg0 finished
Jan 22 04:58:25 np0005591760 systemd[1]: libpod-d5713bef927c4723b11c206f0090540cac5a666e54b262c5bd3c55a25e0aa245.scope: Deactivated successfully.
Jan 22 04:58:25 np0005591760 systemd[1]: libpod-d5713bef927c4723b11c206f0090540cac5a666e54b262c5bd3c55a25e0aa245.scope: Consumed 1.001s CPU time.
Jan 22 04:58:25 np0005591760 podman[260043]: 2026-01-22 09:58:25.177599686 +0000 UTC m=+0.756214545 container died d5713bef927c4723b11c206f0090540cac5a666e54b262c5bd3c55a25e0aa245 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_jackson, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 04:58:25 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ba37564d32ba2b58c795aa8f2275cfa16d1887f61b3bbbaaa4c09ed2a397ac10-merged.mount: Deactivated successfully.
Jan 22 04:58:25 np0005591760 podman[260043]: 2026-01-22 09:58:25.208615387 +0000 UTC m=+0.787230245 container remove d5713bef927c4723b11c206f0090540cac5a666e54b262c5bd3c55a25e0aa245 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_jackson, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:58:25 np0005591760 systemd[1]: libpod-conmon-d5713bef927c4723b11c206f0090540cac5a666e54b262c5bd3c55a25e0aa245.scope: Deactivated successfully.
Jan 22 04:58:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:58:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:58:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:25 np0005591760 nova_compute[248045]: 2026-01-22 09:58:25.389 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v790: 337 pgs: 337 active+clean; 103 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 112 op/s
Jan 22 04:58:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:58:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:26.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:58:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:58:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:26.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:27.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:27.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:27.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:27.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v791: 337 pgs: 337 active+clean; 119 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 119 op/s
Jan 22 04:58:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:58:27] "GET /metrics HTTP/1.1" 200 48608 "" "Prometheus/2.51.0"
Jan 22 04:58:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:58:27] "GET /metrics HTTP/1.1" 200 48608 "" "Prometheus/2.51.0"
Jan 22 04:58:27 np0005591760 nova_compute[248045]: 2026-01-22 09:58:27.847 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:58:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:28.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.475090) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075908475199, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 465, "num_deletes": 251, "total_data_size": 501319, "memory_usage": 509568, "flush_reason": "Manual Compaction"}
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075908478486, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 463721, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23776, "largest_seqno": 24240, "table_properties": {"data_size": 461006, "index_size": 751, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7128, "raw_average_key_size": 20, "raw_value_size": 455457, "raw_average_value_size": 1305, "num_data_blocks": 30, "num_entries": 349, "num_filter_entries": 349, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769075893, "oldest_key_time": 1769075893, "file_creation_time": 1769075908, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 3415 microseconds, and 2606 cpu microseconds.
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.478521) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 463721 bytes OK
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.478539) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.478918) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.478930) EVENT_LOG_v1 {"time_micros": 1769075908478927, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.478956) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 498521, prev total WAL file size 498521, number of live WAL files 2.
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.479372) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(452KB)], [50(16MB)]
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075908479417, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 17513546, "oldest_snapshot_seqno": -1}
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5626 keys, 13416505 bytes, temperature: kUnknown
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075908514281, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13416505, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13379495, "index_size": 21860, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 143224, "raw_average_key_size": 25, "raw_value_size": 13278195, "raw_average_value_size": 2360, "num_data_blocks": 890, "num_entries": 5626, "num_filter_entries": 5626, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769075908, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.514443) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13416505 bytes
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.514921) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 501.8 rd, 384.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 16.3 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(66.7) write-amplify(28.9) OK, records in: 6143, records dropped: 517 output_compression: NoCompression
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.514942) EVENT_LOG_v1 {"time_micros": 1769075908514929, "job": 26, "event": "compaction_finished", "compaction_time_micros": 34900, "compaction_time_cpu_micros": 30732, "output_level": 6, "num_output_files": 1, "total_output_size": 13416505, "num_input_records": 6143, "num_output_records": 5626, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075908515086, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075908517581, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.479289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.517659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.517661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.517663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.517664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:58:28 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:58:28.517665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:58:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:28.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:28.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:28.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:28.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:28.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v792: 337 pgs: 337 active+clean; 119 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 119 op/s
Jan 22 04:58:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:58:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:58:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:58:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:30 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:58:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:58:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:30.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:58:30 np0005591760 nova_compute[248045]: 2026-01-22 09:58:30.278 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:58:30 np0005591760 nova_compute[248045]: 2026-01-22 09:58:30.394 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:58:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:30.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:58:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v793: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 130 op/s
Jan 22 04:58:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:58:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:32.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:58:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:32.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:32 np0005591760 nova_compute[248045]: 2026-01-22 09:58:32.850 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v794: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 22 04:58:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:58:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:34.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:34.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:58:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:58:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:58:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:58:35 np0005591760 nova_compute[248045]: 2026-01-22 09:58:35.395 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v795: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 22 04:58:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:36.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:58:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:36.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:58:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:37.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:37.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:37.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:37.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v796: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 908 KiB/s wr, 29 op/s
Jan 22 04:58:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:58:37] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 04:58:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:58:37] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 04:58:37 np0005591760 nova_compute[248045]: 2026-01-22 09:58:37.851 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.002000021s ======
Jan 22 04:58:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:38.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Jan 22 04:58:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:58:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:38.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:38.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:38.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:38.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:38.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v797: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 25 KiB/s wr, 12 op/s
Jan 22 04:58:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:58:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:58:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:58:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:58:40 np0005591760 podman[260211]: 2026-01-22 09:58:40.062206547 +0000 UTC m=+0.046024546 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 04:58:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:40.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:40 np0005591760 nova_compute[248045]: 2026-01-22 09:58:40.398 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:40.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v798: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 32 KiB/s wr, 40 op/s
Jan 22 04:58:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:42.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:58:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:42.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:58:42 np0005591760 nova_compute[248045]: 2026-01-22 09:58:42.853 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v799: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 29 op/s
Jan 22 04:58:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:58:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:44.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:44.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:58:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:58:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:58:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:45 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:58:45 np0005591760 podman[260232]: 2026-01-22 09:58:45.123468076 +0000 UTC m=+0.098005142 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 22 04:58:45 np0005591760 nova_compute[248045]: 2026-01-22 09:58:45.400 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v800: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 30 op/s
Jan 22 04:58:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:58:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:46.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:58:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:58:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:46.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:58:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:47.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:47.157Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:47.158Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:47.158Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:58:47.317 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:58:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:58:47.317 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:58:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:58:47.318 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:58:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v801: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Jan 22 04:58:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:58:47] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 04:58:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:58:47] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 04:58:47 np0005591760 nova_compute[248045]: 2026-01-22 09:58:47.854 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:58:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:48.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:58:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:58:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:48.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:48.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:48.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:48.898Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:48.898Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:58:49
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', '.rgw.root', 'volumes', 'images', 'vms', '.nfs', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v802: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:58:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:58:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:58:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:58:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:58:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:50 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:58:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:50.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:50 np0005591760 nova_compute[248045]: 2026-01-22 09:58:50.401 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:50.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 22 04:58:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1611073387' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 04:58:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 22 04:58:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1611073387' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 04:58:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v803: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Jan 22 04:58:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:52.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:58:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:52.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:58:52 np0005591760 nova_compute[248045]: 2026-01-22 09:58:52.855 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v804: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 04:58:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:58:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:54.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:54.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:58:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:58:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:58:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:55 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:58:55 np0005591760 nova_compute[248045]: 2026-01-22 09:58:55.403 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v805: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 22 04:58:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.002000020s ======
Jan 22 04:58:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:56.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Jan 22 04:58:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:58:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:56.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:58:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:57.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:57.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:57.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:57.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v806: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 22 04:58:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:58:57] "GET /metrics HTTP/1.1" 200 48584 "" "Prometheus/2.51.0"
Jan 22 04:58:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:58:57] "GET /metrics HTTP/1.1" 200 48584 "" "Prometheus/2.51.0"
Jan 22 04:58:57 np0005591760 nova_compute[248045]: 2026-01-22 09:58:57.857 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:58:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:58:58.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:58:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:58:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:58:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:58:58.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:58:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:58.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:58.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:58.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:58:58.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v807: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:58:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:59:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:58:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:00 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:00.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:00 np0005591760 nova_compute[248045]: 2026-01-22 09:59:00.404 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:00.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v808: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 22 04:59:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:02.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:02.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:02 np0005591760 nova_compute[248045]: 2026-01-22 09:59:02.859 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v809: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 22 04:59:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:59:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:04.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:04.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:05 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:05 np0005591760 nova_compute[248045]: 2026-01-22 09:59:05.405 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v810: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 22 04:59:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:06.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:06.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:07.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:07.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:07.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:07.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v811: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 22 04:59:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:59:07] "GET /metrics HTTP/1.1" 200 48609 "" "Prometheus/2.51.0"
Jan 22 04:59:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:59:07] "GET /metrics HTTP/1.1" 200 48609 "" "Prometheus/2.51.0"
Jan 22 04:59:07 np0005591760 nova_compute[248045]: 2026-01-22 09:59:07.861 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:59:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:08.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:59:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:08.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.697207) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075948697235, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 588, "num_deletes": 251, "total_data_size": 720680, "memory_usage": 732984, "flush_reason": "Manual Compaction"}
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075948700359, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 710040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24241, "largest_seqno": 24828, "table_properties": {"data_size": 706989, "index_size": 1023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7319, "raw_average_key_size": 19, "raw_value_size": 700803, "raw_average_value_size": 1825, "num_data_blocks": 46, "num_entries": 384, "num_filter_entries": 384, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769075909, "oldest_key_time": 1769075909, "file_creation_time": 1769075948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 3176 microseconds, and 2342 cpu microseconds.
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.700385) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 710040 bytes OK
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.700401) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.700751) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.700761) EVENT_LOG_v1 {"time_micros": 1769075948700758, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.700772) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 717513, prev total WAL file size 717513, number of live WAL files 2.
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.701101) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(693KB)], [53(12MB)]
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075948701134, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14126545, "oldest_snapshot_seqno": -1}
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5499 keys, 12051639 bytes, temperature: kUnknown
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075948731577, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12051639, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12016617, "index_size": 20192, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 141265, "raw_average_key_size": 25, "raw_value_size": 11918684, "raw_average_value_size": 2167, "num_data_blocks": 815, "num_entries": 5499, "num_filter_entries": 5499, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769075948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.731749) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12051639 bytes
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.732187) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 463.3 rd, 395.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(36.9) write-amplify(17.0) OK, records in: 6010, records dropped: 511 output_compression: NoCompression
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.732202) EVENT_LOG_v1 {"time_micros": 1769075948732194, "job": 28, "event": "compaction_finished", "compaction_time_micros": 30491, "compaction_time_cpu_micros": 18311, "output_level": 6, "num_output_files": 1, "total_output_size": 12051639, "num_input_records": 6010, "num_output_records": 5499, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075948732360, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769075948734162, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.701046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.734237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.734241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.734243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.734244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:59:08 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-09:59:08.734245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 04:59:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:08.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:08.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:08.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:08.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v812: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 22 04:59:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:10 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:59:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:10.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:59:10 np0005591760 nova_compute[248045]: 2026-01-22 09:59:10.406 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:10.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:11 np0005591760 podman[260305]: 2026-01-22 09:59:11.051845422 +0000 UTC m=+0.040926942 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 04:59:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v813: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Jan 22 04:59:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:12.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:12.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:12 np0005591760 nova_compute[248045]: 2026-01-22 09:59:12.863 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v814: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Jan 22 04:59:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:59:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:14.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:14 np0005591760 nova_compute[248045]: 2026-01-22 09:59:14.309 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:59:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:14.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:15 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:15 np0005591760 nova_compute[248045]: 2026-01-22 09:59:15.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:59:15 np0005591760 nova_compute[248045]: 2026-01-22 09:59:15.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:59:15 np0005591760 nova_compute[248045]: 2026-01-22 09:59:15.408 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v815: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Jan 22 04:59:16 np0005591760 podman[260351]: 2026-01-22 09:59:16.080668378 +0000 UTC m=+0.070622724 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 04:59:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:16.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:16 np0005591760 nova_compute[248045]: 2026-01-22 09:59:16.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:59:16 np0005591760 nova_compute[248045]: 2026-01-22 09:59:16.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 04:59:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:16.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:17.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:17.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:17.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:17.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.316 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.316 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.316 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.316 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.317 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:59:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v816: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Jan 22 04:59:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:59:17] "GET /metrics HTTP/1.1" 200 48609 "" "Prometheus/2.51.0"
Jan 22 04:59:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:59:17] "GET /metrics HTTP/1.1" 200 48609 "" "Prometheus/2.51.0"
Jan 22 04:59:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:59:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2536144470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.667 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.866 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.886 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.887 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4605MB free_disk=59.92186737060547GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, 
"label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.888 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.888 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.929 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.929 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 04:59:17 np0005591760 nova_compute[248045]: 2026-01-22 09:59:17.943 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 04:59:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:18.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:59:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3212397879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:59:18 np0005591760 nova_compute[248045]: 2026-01-22 09:59:18.305 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 04:59:18 np0005591760 nova_compute[248045]: 2026-01-22 09:59:18.310 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 04:59:18 np0005591760 nova_compute[248045]: 2026-01-22 09:59:18.322 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 04:59:18 np0005591760 nova_compute[248045]: 2026-01-22 09:59:18.323 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 04:59:18 np0005591760 nova_compute[248045]: 2026-01-22 09:59:18.324 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:59:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:59:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:18.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:18.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:18.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:18.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:18.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:59:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:59:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:59:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:59:19 np0005591760 nova_compute[248045]: 2026-01-22 09:59:19.324 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:59:19 np0005591760 nova_compute[248045]: 2026-01-22 09:59:19.324 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 04:59:19 np0005591760 nova_compute[248045]: 2026-01-22 09:59:19.325 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 04:59:19 np0005591760 nova_compute[248045]: 2026-01-22 09:59:19.336 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 04:59:19 np0005591760 nova_compute[248045]: 2026-01-22 09:59:19.336 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:59:19 np0005591760 nova_compute[248045]: 2026-01-22 09:59:19.336 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 04:59:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:59:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:59:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v817: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Jan 22 04:59:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:20.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:20 np0005591760 nova_compute[248045]: 2026-01-22 09:59:20.411 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:20.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v818: 337 pgs: 337 active+clean; 188 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 200 op/s
Jan 22 04:59:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:22.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:22.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 04:59:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2511576059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 04:59:22 np0005591760 nova_compute[248045]: 2026-01-22 09:59:22.868 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v819: 337 pgs: 337 active+clean; 188 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 104 op/s
Jan 22 04:59:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:59:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:59:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:24.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:59:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:24.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:25 np0005591760 nova_compute[248045]: 2026-01-22 09:59:25.413 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v820: 337 pgs: 337 active+clean; 200 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 22 04:59:26 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v821: 337 pgs: 337 active+clean; 200 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 232 KiB/s rd, 2.4 MiB/s wr, 68 op/s
Jan 22 04:59:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 04:59:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:59:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 04:59:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:59:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 04:59:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:26.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 04:59:26 np0005591760 podman[260589]: 2026-01-22 09:59:26.478375117 +0000 UTC m=+0.030036455 container create 53933b0cbac027f21ca2077b503c31dbbc4449fbb58060b7888fbeff705e6fc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_rubin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 22 04:59:26 np0005591760 systemd[1]: Started libpod-conmon-53933b0cbac027f21ca2077b503c31dbbc4449fbb58060b7888fbeff705e6fc1.scope.
Jan 22 04:59:26 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:59:26 np0005591760 podman[260589]: 2026-01-22 09:59:26.543615962 +0000 UTC m=+0.095277299 container init 53933b0cbac027f21ca2077b503c31dbbc4449fbb58060b7888fbeff705e6fc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_rubin, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:59:26 np0005591760 podman[260589]: 2026-01-22 09:59:26.548718273 +0000 UTC m=+0.100379600 container start 53933b0cbac027f21ca2077b503c31dbbc4449fbb58060b7888fbeff705e6fc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_rubin, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 22 04:59:26 np0005591760 podman[260589]: 2026-01-22 09:59:26.549959043 +0000 UTC m=+0.101620369 container attach 53933b0cbac027f21ca2077b503c31dbbc4449fbb58060b7888fbeff705e6fc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_rubin, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 04:59:26 np0005591760 jolly_rubin[260602]: 167 167
Jan 22 04:59:26 np0005591760 systemd[1]: libpod-53933b0cbac027f21ca2077b503c31dbbc4449fbb58060b7888fbeff705e6fc1.scope: Deactivated successfully.
Jan 22 04:59:26 np0005591760 podman[260589]: 2026-01-22 09:59:26.553272621 +0000 UTC m=+0.104933949 container died 53933b0cbac027f21ca2077b503c31dbbc4449fbb58060b7888fbeff705e6fc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 04:59:26 np0005591760 podman[260589]: 2026-01-22 09:59:26.466671938 +0000 UTC m=+0.018333325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:59:26 np0005591760 systemd[1]: var-lib-containers-storage-overlay-457b204cc62aa70e5ac7f90722621aa2fb7dceba73389ad60edf98e65445d21d-merged.mount: Deactivated successfully.
Jan 22 04:59:26 np0005591760 podman[260589]: 2026-01-22 09:59:26.572247896 +0000 UTC m=+0.123909223 container remove 53933b0cbac027f21ca2077b503c31dbbc4449fbb58060b7888fbeff705e6fc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_rubin, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 04:59:26 np0005591760 systemd[1]: libpod-conmon-53933b0cbac027f21ca2077b503c31dbbc4449fbb58060b7888fbeff705e6fc1.scope: Deactivated successfully.
Jan 22 04:59:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 04:59:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:26.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 04:59:26 np0005591760 podman[260625]: 2026-01-22 09:59:26.704967633 +0000 UTC m=+0.034335973 container create 7eb05975454c4a5fe6a6039bd47dc7eab4c323ed6d0f44f96b5596c1b3ecd85b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_heyrovsky, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 04:59:26 np0005591760 systemd[1]: Started libpod-conmon-7eb05975454c4a5fe6a6039bd47dc7eab4c323ed6d0f44f96b5596c1b3ecd85b.scope.
Jan 22 04:59:26 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:59:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7d68f4afc9547828ddf033b2053f84c399244af575c557da1da6b81669ca62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7d68f4afc9547828ddf033b2053f84c399244af575c557da1da6b81669ca62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7d68f4afc9547828ddf033b2053f84c399244af575c557da1da6b81669ca62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7d68f4afc9547828ddf033b2053f84c399244af575c557da1da6b81669ca62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:26 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7d68f4afc9547828ddf033b2053f84c399244af575c557da1da6b81669ca62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:26 np0005591760 podman[260625]: 2026-01-22 09:59:26.769829902 +0000 UTC m=+0.099198243 container init 7eb05975454c4a5fe6a6039bd47dc7eab4c323ed6d0f44f96b5596c1b3ecd85b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_heyrovsky, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 22 04:59:26 np0005591760 podman[260625]: 2026-01-22 09:59:26.77686506 +0000 UTC m=+0.106233390 container start 7eb05975454c4a5fe6a6039bd47dc7eab4c323ed6d0f44f96b5596c1b3ecd85b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:59:26 np0005591760 podman[260625]: 2026-01-22 09:59:26.77837112 +0000 UTC m=+0.107739470 container attach 7eb05975454c4a5fe6a6039bd47dc7eab4c323ed6d0f44f96b5596c1b3ecd85b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 04:59:26 np0005591760 podman[260625]: 2026-01-22 09:59:26.692251984 +0000 UTC m=+0.021620344 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:59:26 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 04:59:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:59:26 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:59:26 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 04:59:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:27.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:27 np0005591760 reverent_heyrovsky[260638]: --> passed data devices: 0 physical, 1 LVM
Jan 22 04:59:27 np0005591760 reverent_heyrovsky[260638]: --> All data devices are unavailable
Jan 22 04:59:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:27.072Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:27.072Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:27.072Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:27 np0005591760 systemd[1]: libpod-7eb05975454c4a5fe6a6039bd47dc7eab4c323ed6d0f44f96b5596c1b3ecd85b.scope: Deactivated successfully.
Jan 22 04:59:27 np0005591760 podman[260625]: 2026-01-22 09:59:27.106896985 +0000 UTC m=+0.436265325 container died 7eb05975454c4a5fe6a6039bd47dc7eab4c323ed6d0f44f96b5596c1b3ecd85b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:59:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-bd7d68f4afc9547828ddf033b2053f84c399244af575c557da1da6b81669ca62-merged.mount: Deactivated successfully.
Jan 22 04:59:27 np0005591760 podman[260625]: 2026-01-22 09:59:27.140695272 +0000 UTC m=+0.470063612 container remove 7eb05975454c4a5fe6a6039bd47dc7eab4c323ed6d0f44f96b5596c1b3ecd85b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_heyrovsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 22 04:59:27 np0005591760 systemd[1]: libpod-conmon-7eb05975454c4a5fe6a6039bd47dc7eab4c323ed6d0f44f96b5596c1b3ecd85b.scope: Deactivated successfully.
Jan 22 04:59:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:59:27] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:59:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:59:27] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 04:59:27 np0005591760 podman[260746]: 2026-01-22 09:59:27.638430056 +0000 UTC m=+0.036702335 container create e1faae92e7d650d7a631785a366035f4c8538d6a83df1c6b571478cd378df0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khorana, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 04:59:27 np0005591760 systemd[1]: Started libpod-conmon-e1faae92e7d650d7a631785a366035f4c8538d6a83df1c6b571478cd378df0de.scope.
Jan 22 04:59:27 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:59:27 np0005591760 podman[260746]: 2026-01-22 09:59:27.704549907 +0000 UTC m=+0.102822186 container init e1faae92e7d650d7a631785a366035f4c8538d6a83df1c6b571478cd378df0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khorana, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:59:27 np0005591760 podman[260746]: 2026-01-22 09:59:27.711693609 +0000 UTC m=+0.109965889 container start e1faae92e7d650d7a631785a366035f4c8538d6a83df1c6b571478cd378df0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khorana, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 04:59:27 np0005591760 podman[260746]: 2026-01-22 09:59:27.713292986 +0000 UTC m=+0.111565285 container attach e1faae92e7d650d7a631785a366035f4c8538d6a83df1c6b571478cd378df0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Jan 22 04:59:27 np0005591760 admiring_khorana[260759]: 167 167
Jan 22 04:59:27 np0005591760 systemd[1]: libpod-e1faae92e7d650d7a631785a366035f4c8538d6a83df1c6b571478cd378df0de.scope: Deactivated successfully.
Jan 22 04:59:27 np0005591760 conmon[260759]: conmon e1faae92e7d650d7a631 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1faae92e7d650d7a631785a366035f4c8538d6a83df1c6b571478cd378df0de.scope/container/memory.events
Jan 22 04:59:27 np0005591760 podman[260746]: 2026-01-22 09:59:27.716833031 +0000 UTC m=+0.115105310 container died e1faae92e7d650d7a631785a366035f4c8538d6a83df1c6b571478cd378df0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 22 04:59:27 np0005591760 podman[260746]: 2026-01-22 09:59:27.62470341 +0000 UTC m=+0.022975709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:59:27 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ba6c3610678383fcecd1849b1bfeee114098c8e9283a8fe92a38296079572fc4-merged.mount: Deactivated successfully.
Jan 22 04:59:27 np0005591760 podman[260746]: 2026-01-22 09:59:27.739970785 +0000 UTC m=+0.138243064 container remove e1faae92e7d650d7a631785a366035f4c8538d6a83df1c6b571478cd378df0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:59:27 np0005591760 systemd[1]: libpod-conmon-e1faae92e7d650d7a631785a366035f4c8538d6a83df1c6b571478cd378df0de.scope: Deactivated successfully.
Jan 22 04:59:27 np0005591760 nova_compute[248045]: 2026-01-22 09:59:27.870 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:27 np0005591760 podman[260781]: 2026-01-22 09:59:27.880218409 +0000 UTC m=+0.036199408 container create de7433d8172d8f27a8c0e50419cbaf67628191976c9f8c58904c29febad0a5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_feistel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:59:27 np0005591760 systemd[1]: Started libpod-conmon-de7433d8172d8f27a8c0e50419cbaf67628191976c9f8c58904c29febad0a5f9.scope.
Jan 22 04:59:27 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:59:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf751848b5effeb2ffdd845574b9ad803c16a594fb5a9e1538a825fa6d6029a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf751848b5effeb2ffdd845574b9ad803c16a594fb5a9e1538a825fa6d6029a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf751848b5effeb2ffdd845574b9ad803c16a594fb5a9e1538a825fa6d6029a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:27 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf751848b5effeb2ffdd845574b9ad803c16a594fb5a9e1538a825fa6d6029a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:27 np0005591760 podman[260781]: 2026-01-22 09:59:27.949147137 +0000 UTC m=+0.105128146 container init de7433d8172d8f27a8c0e50419cbaf67628191976c9f8c58904c29febad0a5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_feistel, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 22 04:59:27 np0005591760 podman[260781]: 2026-01-22 09:59:27.954527693 +0000 UTC m=+0.110508692 container start de7433d8172d8f27a8c0e50419cbaf67628191976c9f8c58904c29febad0a5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:59:27 np0005591760 podman[260781]: 2026-01-22 09:59:27.956118594 +0000 UTC m=+0.112099593 container attach de7433d8172d8f27a8c0e50419cbaf67628191976c9f8c58904c29febad0a5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_feistel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 04:59:27 np0005591760 podman[260781]: 2026-01-22 09:59:27.865150783 +0000 UTC m=+0.021131773 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:59:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:28 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v822: 337 pgs: 337 active+clean; 200 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 232 KiB/s rd, 2.4 MiB/s wr, 68 op/s
Jan 22 04:59:28 np0005591760 angry_feistel[260794]: {
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:    "0": [
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:        {
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            "devices": [
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "/dev/loop3"
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            ],
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            "lv_name": "ceph_lv0",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            "lv_size": "21470642176",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            "name": "ceph_lv0",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            "tags": {
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.cephx_lockbox_secret": "",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.cluster_name": "ceph",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.crush_device_class": "",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.encrypted": "0",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.osd_id": "0",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.type": "block",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.vdo": "0",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:                "ceph.with_tpm": "0"
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            },
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            "type": "block",
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:            "vg_name": "ceph_vg0"
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:        }
Jan 22 04:59:28 np0005591760 angry_feistel[260794]:    ]
Jan 22 04:59:28 np0005591760 angry_feistel[260794]: }
Jan 22 04:59:28 np0005591760 systemd[1]: libpod-de7433d8172d8f27a8c0e50419cbaf67628191976c9f8c58904c29febad0a5f9.scope: Deactivated successfully.
Jan 22 04:59:28 np0005591760 podman[260781]: 2026-01-22 09:59:28.202524875 +0000 UTC m=+0.358505874 container died de7433d8172d8f27a8c0e50419cbaf67628191976c9f8c58904c29febad0a5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 22 04:59:28 np0005591760 systemd[1]: var-lib-containers-storage-overlay-cf751848b5effeb2ffdd845574b9ad803c16a594fb5a9e1538a825fa6d6029a1-merged.mount: Deactivated successfully.
Jan 22 04:59:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:28.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:28 np0005591760 podman[260781]: 2026-01-22 09:59:28.232497418 +0000 UTC m=+0.388478407 container remove de7433d8172d8f27a8c0e50419cbaf67628191976c9f8c58904c29febad0a5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_feistel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 04:59:28 np0005591760 systemd[1]: libpod-conmon-de7433d8172d8f27a8c0e50419cbaf67628191976c9f8c58904c29febad0a5f9.scope: Deactivated successfully.
Jan 22 04:59:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:59:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:28.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:28 np0005591760 podman[260897]: 2026-01-22 09:59:28.717773206 +0000 UTC m=+0.031143162 container create c4993116a41bab2e26b65dd31d766dcc696da69d1b6db4134dc784350cc9ad3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_goldberg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 04:59:28 np0005591760 systemd[1]: Started libpod-conmon-c4993116a41bab2e26b65dd31d766dcc696da69d1b6db4134dc784350cc9ad3b.scope.
Jan 22 04:59:28 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:59:28 np0005591760 podman[260897]: 2026-01-22 09:59:28.798971803 +0000 UTC m=+0.112341769 container init c4993116a41bab2e26b65dd31d766dcc696da69d1b6db4134dc784350cc9ad3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_goldberg, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:59:28 np0005591760 podman[260897]: 2026-01-22 09:59:28.703759639 +0000 UTC m=+0.017129615 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:59:28 np0005591760 podman[260897]: 2026-01-22 09:59:28.804904922 +0000 UTC m=+0.118274878 container start c4993116a41bab2e26b65dd31d766dcc696da69d1b6db4134dc784350cc9ad3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_goldberg, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 04:59:28 np0005591760 podman[260897]: 2026-01-22 09:59:28.806482127 +0000 UTC m=+0.119852103 container attach c4993116a41bab2e26b65dd31d766dcc696da69d1b6db4134dc784350cc9ad3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_goldberg, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 22 04:59:28 np0005591760 lucid_goldberg[260910]: 167 167
Jan 22 04:59:28 np0005591760 systemd[1]: libpod-c4993116a41bab2e26b65dd31d766dcc696da69d1b6db4134dc784350cc9ad3b.scope: Deactivated successfully.
Jan 22 04:59:28 np0005591760 conmon[260910]: conmon c4993116a41bab2e26b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4993116a41bab2e26b65dd31d766dcc696da69d1b6db4134dc784350cc9ad3b.scope/container/memory.events
Jan 22 04:59:28 np0005591760 podman[260897]: 2026-01-22 09:59:28.81229543 +0000 UTC m=+0.125665385 container died c4993116a41bab2e26b65dd31d766dcc696da69d1b6db4134dc784350cc9ad3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 04:59:28 np0005591760 systemd[1]: var-lib-containers-storage-overlay-99c24f627fdca25e8020add641afaecbb65c037758ea0ef60afe5f55972d94a0-merged.mount: Deactivated successfully.
Jan 22 04:59:28 np0005591760 podman[260897]: 2026-01-22 09:59:28.83472612 +0000 UTC m=+0.148096066 container remove c4993116a41bab2e26b65dd31d766dcc696da69d1b6db4134dc784350cc9ad3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:59:28 np0005591760 systemd[1]: libpod-conmon-c4993116a41bab2e26b65dd31d766dcc696da69d1b6db4134dc784350cc9ad3b.scope: Deactivated successfully.
Jan 22 04:59:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:28.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:28.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:28.907Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:28.907Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:28 np0005591760 podman[260932]: 2026-01-22 09:59:28.970800483 +0000 UTC m=+0.039485344 container create 591d38da0924399083c704e0cf94b8a65912181b123e9cfdfa9fe34771f0b399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 04:59:29 np0005591760 systemd[1]: Started libpod-conmon-591d38da0924399083c704e0cf94b8a65912181b123e9cfdfa9fe34771f0b399.scope.
Jan 22 04:59:29 np0005591760 systemd[1]: Started libcrun container.
Jan 22 04:59:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028690fabc50e5be3de02b02e8eb564c666087d86693c9f97c93dead84570026/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028690fabc50e5be3de02b02e8eb564c666087d86693c9f97c93dead84570026/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028690fabc50e5be3de02b02e8eb564c666087d86693c9f97c93dead84570026/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028690fabc50e5be3de02b02e8eb564c666087d86693c9f97c93dead84570026/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 04:59:29 np0005591760 podman[260932]: 2026-01-22 09:59:29.031608668 +0000 UTC m=+0.100293539 container init 591d38da0924399083c704e0cf94b8a65912181b123e9cfdfa9fe34771f0b399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:59:29 np0005591760 podman[260932]: 2026-01-22 09:59:29.03681747 +0000 UTC m=+0.105502322 container start 591d38da0924399083c704e0cf94b8a65912181b123e9cfdfa9fe34771f0b399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 04:59:29 np0005591760 podman[260932]: 2026-01-22 09:59:29.038013376 +0000 UTC m=+0.106698247 container attach 591d38da0924399083c704e0cf94b8a65912181b123e9cfdfa9fe34771f0b399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_poincare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 04:59:29 np0005591760 podman[260932]: 2026-01-22 09:59:28.955305252 +0000 UTC m=+0.023990123 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 04:59:29 np0005591760 lvm[261021]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 04:59:29 np0005591760 lvm[261021]: VG ceph_vg0 finished
Jan 22 04:59:29 np0005591760 strange_poincare[260945]: {}
Jan 22 04:59:29 np0005591760 systemd[1]: libpod-591d38da0924399083c704e0cf94b8a65912181b123e9cfdfa9fe34771f0b399.scope: Deactivated successfully.
Jan 22 04:59:29 np0005591760 systemd[1]: libpod-591d38da0924399083c704e0cf94b8a65912181b123e9cfdfa9fe34771f0b399.scope: Consumed 1.016s CPU time.
Jan 22 04:59:29 np0005591760 podman[260932]: 2026-01-22 09:59:29.65966665 +0000 UTC m=+0.728351501 container died 591d38da0924399083c704e0cf94b8a65912181b123e9cfdfa9fe34771f0b399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_poincare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 04:59:29 np0005591760 systemd[1]: var-lib-containers-storage-overlay-028690fabc50e5be3de02b02e8eb564c666087d86693c9f97c93dead84570026-merged.mount: Deactivated successfully.
Jan 22 04:59:29 np0005591760 podman[260932]: 2026-01-22 09:59:29.684879397 +0000 UTC m=+0.753564248 container remove 591d38da0924399083c704e0cf94b8a65912181b123e9cfdfa9fe34771f0b399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 04:59:29 np0005591760 systemd[1]: libpod-conmon-591d38da0924399083c704e0cf94b8a65912181b123e9cfdfa9fe34771f0b399.scope: Deactivated successfully.
Jan 22 04:59:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 04:59:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:59:29 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 04:59:29 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:59:29 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:59:29 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 04:59:30 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v823: 337 pgs: 337 active+clean; 200 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 232 KiB/s rd, 2.4 MiB/s wr, 68 op/s
Jan 22 04:59:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:30.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:30 np0005591760 nova_compute[248045]: 2026-01-22 09:59:30.415 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:30.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:32 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v824: 337 pgs: 337 active+clean; 200 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 132 KiB/s rd, 124 KiB/s wr, 24 op/s
Jan 22 04:59:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:32.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:32 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:59:32.257 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:52:1d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:ec:a7:e9:bb:bd'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 04:59:32 np0005591760 nova_compute[248045]: 2026-01-22 09:59:32.258 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:32 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:59:32.258 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 04:59:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:32.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:32 np0005591760 nova_compute[248045]: 2026-01-22 09:59:32.872 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:59:34 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v825: 337 pgs: 337 active+clean; 200 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 132 KiB/s rd, 124 KiB/s wr, 24 op/s
Jan 22 04:59:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:34.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:34.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:35 np0005591760 nova_compute[248045]: 2026-01-22 09:59:35.416 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:36 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v826: 337 pgs: 337 active+clean; 200 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 15 KiB/s wr, 2 op/s
Jan 22 04:59:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:36.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:36 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:59:36.260 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 04:59:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:36.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:37.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:37.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:37.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:37.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:59:37] "GET /metrics HTTP/1.1" 200 48606 "" "Prometheus/2.51.0"
Jan 22 04:59:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:59:37] "GET /metrics HTTP/1.1" 200 48606 "" "Prometheus/2.51.0"
Jan 22 04:59:37 np0005591760 nova_compute[248045]: 2026-01-22 09:59:37.873 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:38 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v827: 337 pgs: 337 active+clean; 200 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 13 KiB/s wr, 1 op/s
Jan 22 04:59:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:38.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:59:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:38.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:38.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:38.903Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:38.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:38.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:40 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v828: 337 pgs: 337 active+clean; 200 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 13 KiB/s wr, 1 op/s
Jan 22 04:59:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:40.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:40 np0005591760 nova_compute[248045]: 2026-01-22 09:59:40.417 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:40.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:42 np0005591760 podman[261096]: 2026-01-22 09:59:42.052305306 +0000 UTC m=+0.043514021 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 04:59:42 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v829: 337 pgs: 337 active+clean; 121 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 30 op/s
Jan 22 04:59:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:42.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:42.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:42 np0005591760 nova_compute[248045]: 2026-01-22 09:59:42.875 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:59:44 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v830: 337 pgs: 337 active+clean; 121 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.5 KiB/s wr, 29 op/s
Jan 22 04:59:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:44.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:44.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:45 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:45 np0005591760 nova_compute[248045]: 2026-01-22 09:59:45.419 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:46 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v831: 337 pgs: 337 active+clean; 41 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 6.7 KiB/s wr, 58 op/s
Jan 22 04:59:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:46.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:46.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:47.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:47 np0005591760 podman[261117]: 2026-01-22 09:59:47.073212534 +0000 UTC m=+0.062054094 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 04:59:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:47.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:47.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:47.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:59:47.318 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 04:59:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:59:47.318 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 04:59:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 09:59:47.319 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 04:59:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:59:47] "GET /metrics HTTP/1.1" 200 48606 "" "Prometheus/2.51.0"
Jan 22 04:59:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:59:47] "GET /metrics HTTP/1.1" 200 48606 "" "Prometheus/2.51.0"
Jan 22 04:59:47 np0005591760 nova_compute[248045]: 2026-01-22 09:59:47.878 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:48 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v832: 337 pgs: 337 active+clean; 41 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Jan 22 04:59:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:48.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:59:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:48.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:48.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:48.905Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:48.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:48.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_09:59:49
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.log', '.nfs', 'volumes', 'backups', 'default.rgw.control']
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 04:59:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 04:59:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:50 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:50 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v833: 337 pgs: 337 active+clean; 41 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Jan 22 04:59:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:50.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:50 np0005591760 nova_compute[248045]: 2026-01-22 09:59:50.422 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:50.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 22 04:59:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4285911383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 04:59:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 22 04:59:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4285911383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 04:59:52 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v834: 337 pgs: 337 active+clean; 41 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Jan 22 04:59:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:52.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:52.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:52 np0005591760 nova_compute[248045]: 2026-01-22 09:59:52.879 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:59:54 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v835: 337 pgs: 337 active+clean; 41 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 22 04:59:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:54.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:54.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 04:59:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 04:59:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 04:59:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:55 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 04:59:55 np0005591760 nova_compute[248045]: 2026-01-22 09:59:55.423 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:56 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v836: 337 pgs: 337 active+clean; 41 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 04:59:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:56.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:56.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:57.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:57.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:57.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:57.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:09:59:57] "GET /metrics HTTP/1.1" 200 48589 "" "Prometheus/2.51.0"
Jan 22 04:59:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:09:59:57] "GET /metrics HTTP/1.1" 200 48589 "" "Prometheus/2.51.0"
Jan 22 04:59:57 np0005591760 nova_compute[248045]: 2026-01-22 09:59:57.881 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 04:59:58 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v837: 337 pgs: 337 active+clean; 41 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 04:59:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:09:59:58.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 04:59:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 04:59:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 04:59:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:09:59:58.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 04:59:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:58.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:58.909Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:58.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T09:59:58.911Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 04:59:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:00:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:00:00 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 failed cephadm daemon(s)
Jan 22 05:00:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:00:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 09:59:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:00:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:00 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:00:00 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v838: 337 pgs: 337 active+clean; 41 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:00:00 np0005591760 ceph-mon[74254]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Jan 22 05:00:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:00.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:00 np0005591760 nova_compute[248045]: 2026-01-22 10:00:00.425 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:00.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:02 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v839: 337 pgs: 337 active+clean; 88 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 22 05:00:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:02.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:02.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:02 np0005591760 nova_compute[248045]: 2026-01-22 10:00:02.883 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:00:04 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v840: 337 pgs: 337 active+clean; 88 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 22 05:00:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:00:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:04.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:00:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:04.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:00:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:00:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:00:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:05 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:00:05 np0005591760 nova_compute[248045]: 2026-01-22 10:00:05.427 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:06 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v841: 337 pgs: 337 active+clean; 88 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 22 05:00:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:06.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:06.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:07.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:07.072Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:07.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:07.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:00:07] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 05:00:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:00:07] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 05:00:07 np0005591760 nova_compute[248045]: 2026-01-22 10:00:07.885 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:08 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v842: 337 pgs: 337 active+clean; 88 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 22 05:00:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:08.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:00:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:08.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:08.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:08.905Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:08.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:08.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:00:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:00:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:00:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:10 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:00:10 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v843: 337 pgs: 337 active+clean; 88 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 22 05:00:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:10.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:10 np0005591760 nova_compute[248045]: 2026-01-22 10:00:10.428 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:10.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:12 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v844: 337 pgs: 337 active+clean; 88 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 22 05:00:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:12.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:12.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:12 np0005591760 nova_compute[248045]: 2026-01-22 10:00:12.887 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:13 np0005591760 podman[261191]: 2026-01-22 10:00:13.052343763 +0000 UTC m=+0.044847857 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 05:00:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:00:14 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v845: 337 pgs: 337 active+clean; 88 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 22 05:00:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:00:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:14.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:00:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:14.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:00:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:00:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:00:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:15 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:00:15 np0005591760 nova_compute[248045]: 2026-01-22 10:00:15.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:00:15 np0005591760 nova_compute[248045]: 2026-01-22 10:00:15.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:00:15 np0005591760 nova_compute[248045]: 2026-01-22 10:00:15.312 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:00:15 np0005591760 nova_compute[248045]: 2026-01-22 10:00:15.312 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:00:15 np0005591760 nova_compute[248045]: 2026-01-22 10:00:15.429 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:16 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v846: 337 pgs: 337 active+clean; 121 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 22 05:00:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:16.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:16 np0005591760 nova_compute[248045]: 2026-01-22 10:00:16.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:00:16 np0005591760 nova_compute[248045]: 2026-01-22 10:00:16.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:00:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:16.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:17.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:17.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:17.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:17.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:00:17] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 05:00:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:00:17] "GET /metrics HTTP/1.1" 200 48605 "" "Prometheus/2.51.0"
Jan 22 05:00:17 np0005591760 nova_compute[248045]: 2026-01-22 10:00:17.890 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:18 np0005591760 podman[261237]: 2026-01-22 10:00:18.066352382 +0000 UTC m=+0.059175807 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:00:18 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v847: 337 pgs: 337 active+clean; 121 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 185 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 22 05:00:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:18.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.325 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.325 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.325 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.325 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.325 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:00:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.666 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:00:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:18.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.871 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.872 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4628MB free_disk=59.94289016723633GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, 
"label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.873 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.873 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:00:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:18.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:18.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:18.907Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:18.907Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.916 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.916 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:00:18 np0005591760 nova_compute[248045]: 2026-01-22 10:00:18.928 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:00:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:00:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/597819202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:00:19 np0005591760 nova_compute[248045]: 2026-01-22 10:00:19.279 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:00:19 np0005591760 nova_compute[248045]: 2026-01-22 10:00:19.283 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:00:19 np0005591760 nova_compute[248045]: 2026-01-22 10:00:19.295 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:00:19 np0005591760 nova_compute[248045]: 2026-01-22 10:00:19.296 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:00:19 np0005591760 nova_compute[248045]: 2026-01-22 10:00:19.296 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:00:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:00:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:00:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:00:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:00:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:00:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:00:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:00:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:00:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:00:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:00:20 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v848: 337 pgs: 337 active+clean; 121 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 185 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 22 05:00:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:00:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:20.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:00:20 np0005591760 nova_compute[248045]: 2026-01-22 10:00:20.297 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:00:20 np0005591760 nova_compute[248045]: 2026-01-22 10:00:20.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:00:20 np0005591760 nova_compute[248045]: 2026-01-22 10:00:20.299 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:00:20 np0005591760 nova_compute[248045]: 2026-01-22 10:00:20.299 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:00:20 np0005591760 nova_compute[248045]: 2026-01-22 10:00:20.327 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:00:20 np0005591760 nova_compute[248045]: 2026-01-22 10:00:20.327 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:00:20 np0005591760 nova_compute[248045]: 2026-01-22 10:00:20.327 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:00:20 np0005591760 nova_compute[248045]: 2026-01-22 10:00:20.432 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:20 np0005591760 ceph-osd[82185]: bluestore.MempoolThread fragmentation_score=0.000400 took=0.000035s
Jan 22 05:00:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:20.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:22 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v849: 337 pgs: 337 active+clean; 121 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 185 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 22 05:00:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:22.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:22.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:22 np0005591760 nova_compute[248045]: 2026-01-22 10:00:22.892 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:00:24 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v850: 337 pgs: 337 active+clean; 121 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 185 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 22 05:00:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:24.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:24.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:00:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:00:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:00:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:00:25 np0005591760 nova_compute[248045]: 2026-01-22 10:00:25.433 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:26 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v851: 337 pgs: 337 active+clean; 121 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 185 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 22 05:00:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:26.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:26.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:27.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:27.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:27.078Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:27.078Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:00:27] "GET /metrics HTTP/1.1" 200 48609 "" "Prometheus/2.51.0"
Jan 22 05:00:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:00:27] "GET /metrics HTTP/1.1" 200 48609 "" "Prometheus/2.51.0"
Jan 22 05:00:27 np0005591760 nova_compute[248045]: 2026-01-22 10:00:27.893 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:28 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v852: 337 pgs: 337 active+clean; 121 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 16 KiB/s wr, 1 op/s
Jan 22 05:00:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:00:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:28.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:00:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:00:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:28.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:28.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:29.126Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:29.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:29.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:00:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:00:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:00:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:30 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:00:30 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v853: 337 pgs: 337 active+clean; 121 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 16 KiB/s wr, 1 op/s
Jan 22 05:00:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:30.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:30 np0005591760 nova_compute[248045]: 2026-01-22 10:00:30.435 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:30 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v854: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 20 KiB/s wr, 34 op/s
Jan 22 05:00:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:00:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:00:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:00:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:00:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:30.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:30 np0005591760 podman[261475]: 2026-01-22 10:00:30.869397783 +0000 UTC m=+0.027647099 container create 258b61c167b95d8c9817f26f2acd1e7d1179932b992a3d635aa96e0cb43cb0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ritchie, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:00:30 np0005591760 systemd[1]: Started libpod-conmon-258b61c167b95d8c9817f26f2acd1e7d1179932b992a3d635aa96e0cb43cb0b7.scope.
Jan 22 05:00:30 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:00:30 np0005591760 podman[261475]: 2026-01-22 10:00:30.925469315 +0000 UTC m=+0.083718632 container init 258b61c167b95d8c9817f26f2acd1e7d1179932b992a3d635aa96e0cb43cb0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ritchie, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 05:00:30 np0005591760 podman[261475]: 2026-01-22 10:00:30.930209015 +0000 UTC m=+0.088458321 container start 258b61c167b95d8c9817f26f2acd1e7d1179932b992a3d635aa96e0cb43cb0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ritchie, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:00:30 np0005591760 podman[261475]: 2026-01-22 10:00:30.931503676 +0000 UTC m=+0.089752983 container attach 258b61c167b95d8c9817f26f2acd1e7d1179932b992a3d635aa96e0cb43cb0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ritchie, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Jan 22 05:00:30 np0005591760 musing_ritchie[261489]: 167 167
Jan 22 05:00:30 np0005591760 systemd[1]: libpod-258b61c167b95d8c9817f26f2acd1e7d1179932b992a3d635aa96e0cb43cb0b7.scope: Deactivated successfully.
Jan 22 05:00:30 np0005591760 conmon[261489]: conmon 258b61c167b95d8c9817 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-258b61c167b95d8c9817f26f2acd1e7d1179932b992a3d635aa96e0cb43cb0b7.scope/container/memory.events
Jan 22 05:00:30 np0005591760 podman[261475]: 2026-01-22 10:00:30.935400177 +0000 UTC m=+0.093649513 container died 258b61c167b95d8c9817f26f2acd1e7d1179932b992a3d635aa96e0cb43cb0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:00:30 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a8d384c2e7eef1913950f4ff2090f092729b2fe113ecad9f3e66d0228ffcadfc-merged.mount: Deactivated successfully.
Jan 22 05:00:30 np0005591760 podman[261475]: 2026-01-22 10:00:30.857256322 +0000 UTC m=+0.015505648 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:00:30 np0005591760 podman[261475]: 2026-01-22 10:00:30.955434005 +0000 UTC m=+0.113683311 container remove 258b61c167b95d8c9817f26f2acd1e7d1179932b992a3d635aa96e0cb43cb0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ritchie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Jan 22 05:00:30 np0005591760 systemd[1]: libpod-conmon-258b61c167b95d8c9817f26f2acd1e7d1179932b992a3d635aa96e0cb43cb0b7.scope: Deactivated successfully.
Jan 22 05:00:31 np0005591760 podman[261512]: 2026-01-22 10:00:31.080732562 +0000 UTC m=+0.029566258 container create 2dfa74e49bdb5cd08da0b205ae5df62d55ac7e27a703d0571f2d4f525aaa83db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:00:31 np0005591760 systemd[1]: Started libpod-conmon-2dfa74e49bdb5cd08da0b205ae5df62d55ac7e27a703d0571f2d4f525aaa83db.scope.
Jan 22 05:00:31 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:00:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661be177815b270c139ccf838ed7bbea085cb54ef3faf79369fa5e61f13ae1cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661be177815b270c139ccf838ed7bbea085cb54ef3faf79369fa5e61f13ae1cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661be177815b270c139ccf838ed7bbea085cb54ef3faf79369fa5e61f13ae1cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661be177815b270c139ccf838ed7bbea085cb54ef3faf79369fa5e61f13ae1cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:31 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661be177815b270c139ccf838ed7bbea085cb54ef3faf79369fa5e61f13ae1cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:31 np0005591760 podman[261512]: 2026-01-22 10:00:31.132695485 +0000 UTC m=+0.081529181 container init 2dfa74e49bdb5cd08da0b205ae5df62d55ac7e27a703d0571f2d4f525aaa83db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_jackson, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:00:31 np0005591760 podman[261512]: 2026-01-22 10:00:31.138365769 +0000 UTC m=+0.087199456 container start 2dfa74e49bdb5cd08da0b205ae5df62d55ac7e27a703d0571f2d4f525aaa83db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_jackson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:00:31 np0005591760 podman[261512]: 2026-01-22 10:00:31.140212884 +0000 UTC m=+0.089046570 container attach 2dfa74e49bdb5cd08da0b205ae5df62d55ac7e27a703d0571f2d4f525aaa83db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 22 05:00:31 np0005591760 podman[261512]: 2026-01-22 10:00:31.069122653 +0000 UTC m=+0.017956350 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:00:31 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:00:31 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:00:31 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:00:31 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:00:31 np0005591760 affectionate_jackson[261525]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:00:31 np0005591760 affectionate_jackson[261525]: --> All data devices are unavailable
Jan 22 05:00:31 np0005591760 systemd[1]: libpod-2dfa74e49bdb5cd08da0b205ae5df62d55ac7e27a703d0571f2d4f525aaa83db.scope: Deactivated successfully.
Jan 22 05:00:31 np0005591760 podman[261540]: 2026-01-22 10:00:31.442311452 +0000 UTC m=+0.019563582 container died 2dfa74e49bdb5cd08da0b205ae5df62d55ac7e27a703d0571f2d4f525aaa83db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_jackson, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 22 05:00:31 np0005591760 systemd[1]: var-lib-containers-storage-overlay-661be177815b270c139ccf838ed7bbea085cb54ef3faf79369fa5e61f13ae1cd-merged.mount: Deactivated successfully.
Jan 22 05:00:31 np0005591760 podman[261540]: 2026-01-22 10:00:31.461366022 +0000 UTC m=+0.038618153 container remove 2dfa74e49bdb5cd08da0b205ae5df62d55ac7e27a703d0571f2d4f525aaa83db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_jackson, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 05:00:31 np0005591760 systemd[1]: libpod-conmon-2dfa74e49bdb5cd08da0b205ae5df62d55ac7e27a703d0571f2d4f525aaa83db.scope: Deactivated successfully.
Jan 22 05:00:31 np0005591760 podman[261631]: 2026-01-22 10:00:31.887123696 +0000 UTC m=+0.026891543 container create 5b6f3053fdd06a501bf1007c7cfc054f2f25833f47daac0e3664da9fd248e4da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_sammet, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:00:31 np0005591760 systemd[1]: Started libpod-conmon-5b6f3053fdd06a501bf1007c7cfc054f2f25833f47daac0e3664da9fd248e4da.scope.
Jan 22 05:00:31 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:00:31 np0005591760 podman[261631]: 2026-01-22 10:00:31.942938212 +0000 UTC m=+0.082706080 container init 5b6f3053fdd06a501bf1007c7cfc054f2f25833f47daac0e3664da9fd248e4da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_sammet, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 05:00:31 np0005591760 podman[261631]: 2026-01-22 10:00:31.9477986 +0000 UTC m=+0.087566448 container start 5b6f3053fdd06a501bf1007c7cfc054f2f25833f47daac0e3664da9fd248e4da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_sammet, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:00:31 np0005591760 podman[261631]: 2026-01-22 10:00:31.948866575 +0000 UTC m=+0.088634423 container attach 5b6f3053fdd06a501bf1007c7cfc054f2f25833f47daac0e3664da9fd248e4da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:00:31 np0005591760 dreamy_sammet[261645]: 167 167
Jan 22 05:00:31 np0005591760 systemd[1]: libpod-5b6f3053fdd06a501bf1007c7cfc054f2f25833f47daac0e3664da9fd248e4da.scope: Deactivated successfully.
Jan 22 05:00:31 np0005591760 podman[261631]: 2026-01-22 10:00:31.95097987 +0000 UTC m=+0.090747728 container died 5b6f3053fdd06a501bf1007c7cfc054f2f25833f47daac0e3664da9fd248e4da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_sammet, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:00:31 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a797bf2eea1ae0e3e724f8fe056aac876a3cfd4f65a860d87fcdceea771ce7b8-merged.mount: Deactivated successfully.
Jan 22 05:00:31 np0005591760 podman[261631]: 2026-01-22 10:00:31.972705269 +0000 UTC m=+0.112473117 container remove 5b6f3053fdd06a501bf1007c7cfc054f2f25833f47daac0e3664da9fd248e4da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_sammet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 05:00:31 np0005591760 podman[261631]: 2026-01-22 10:00:31.876264052 +0000 UTC m=+0.016031920 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:00:31 np0005591760 systemd[1]: libpod-conmon-5b6f3053fdd06a501bf1007c7cfc054f2f25833f47daac0e3664da9fd248e4da.scope: Deactivated successfully.
Jan 22 05:00:32 np0005591760 podman[261667]: 2026-01-22 10:00:32.091565563 +0000 UTC m=+0.028571724 container create 743bbe47ad244fc9452efe449caee8b76bbd1cc0f9055252f87e890cb2cd45f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:00:32 np0005591760 systemd[1]: Started libpod-conmon-743bbe47ad244fc9452efe449caee8b76bbd1cc0f9055252f87e890cb2cd45f1.scope.
Jan 22 05:00:32 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:00:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bb2b8f43d39a077349fc16e4ce653c1973d5dd376485a8caa983b753cc5b14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bb2b8f43d39a077349fc16e4ce653c1973d5dd376485a8caa983b753cc5b14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bb2b8f43d39a077349fc16e4ce653c1973d5dd376485a8caa983b753cc5b14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bb2b8f43d39a077349fc16e4ce653c1973d5dd376485a8caa983b753cc5b14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:32 np0005591760 podman[261667]: 2026-01-22 10:00:32.143816799 +0000 UTC m=+0.080822969 container init 743bbe47ad244fc9452efe449caee8b76bbd1cc0f9055252f87e890cb2cd45f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_napier, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 05:00:32 np0005591760 podman[261667]: 2026-01-22 10:00:32.148314112 +0000 UTC m=+0.085320271 container start 743bbe47ad244fc9452efe449caee8b76bbd1cc0f9055252f87e890cb2cd45f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_napier, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:00:32 np0005591760 podman[261667]: 2026-01-22 10:00:32.149467968 +0000 UTC m=+0.086474127 container attach 743bbe47ad244fc9452efe449caee8b76bbd1cc0f9055252f87e890cb2cd45f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_napier, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 05:00:32 np0005591760 podman[261667]: 2026-01-22 10:00:32.079543297 +0000 UTC m=+0.016549476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:00:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:32.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:32 np0005591760 exciting_napier[261680]: {
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:    "0": [
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:        {
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            "devices": [
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "/dev/loop3"
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            ],
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            "lv_name": "ceph_lv0",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            "lv_size": "21470642176",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            "name": "ceph_lv0",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            "tags": {
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.cluster_name": "ceph",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.crush_device_class": "",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.encrypted": "0",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.osd_id": "0",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.type": "block",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.vdo": "0",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:                "ceph.with_tpm": "0"
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            },
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            "type": "block",
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:            "vg_name": "ceph_vg0"
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:        }
Jan 22 05:00:32 np0005591760 exciting_napier[261680]:    ]
Jan 22 05:00:32 np0005591760 exciting_napier[261680]: }
Jan 22 05:00:32 np0005591760 systemd[1]: libpod-743bbe47ad244fc9452efe449caee8b76bbd1cc0f9055252f87e890cb2cd45f1.scope: Deactivated successfully.
Jan 22 05:00:32 np0005591760 podman[261667]: 2026-01-22 10:00:32.384452696 +0000 UTC m=+0.321458856 container died 743bbe47ad244fc9452efe449caee8b76bbd1cc0f9055252f87e890cb2cd45f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:00:32 np0005591760 systemd[1]: var-lib-containers-storage-overlay-02bb2b8f43d39a077349fc16e4ce653c1973d5dd376485a8caa983b753cc5b14-merged.mount: Deactivated successfully.
Jan 22 05:00:32 np0005591760 podman[261667]: 2026-01-22 10:00:32.406742609 +0000 UTC m=+0.343748769 container remove 743bbe47ad244fc9452efe449caee8b76bbd1cc0f9055252f87e890cb2cd45f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_napier, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:00:32 np0005591760 systemd[1]: libpod-conmon-743bbe47ad244fc9452efe449caee8b76bbd1cc0f9055252f87e890cb2cd45f1.scope: Deactivated successfully.
Jan 22 05:00:32 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v855: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 6.7 KiB/s wr, 33 op/s
Jan 22 05:00:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:32.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:32 np0005591760 podman[261781]: 2026-01-22 10:00:32.813236757 +0000 UTC m=+0.026617857 container create 742a4913030fd1a07bec8052fd43b92b67189a3490a18c7155776858baa9f4a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 22 05:00:32 np0005591760 systemd[1]: Started libpod-conmon-742a4913030fd1a07bec8052fd43b92b67189a3490a18c7155776858baa9f4a1.scope.
Jan 22 05:00:32 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:00:32 np0005591760 podman[261781]: 2026-01-22 10:00:32.867461365 +0000 UTC m=+0.080842476 container init 742a4913030fd1a07bec8052fd43b92b67189a3490a18c7155776858baa9f4a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 05:00:32 np0005591760 podman[261781]: 2026-01-22 10:00:32.871722884 +0000 UTC m=+0.085103983 container start 742a4913030fd1a07bec8052fd43b92b67189a3490a18c7155776858baa9f4a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lalande, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:00:32 np0005591760 podman[261781]: 2026-01-22 10:00:32.872764668 +0000 UTC m=+0.086145788 container attach 742a4913030fd1a07bec8052fd43b92b67189a3490a18c7155776858baa9f4a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lalande, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 05:00:32 np0005591760 boring_lalande[261793]: 167 167
Jan 22 05:00:32 np0005591760 systemd[1]: libpod-742a4913030fd1a07bec8052fd43b92b67189a3490a18c7155776858baa9f4a1.scope: Deactivated successfully.
Jan 22 05:00:32 np0005591760 podman[261781]: 2026-01-22 10:00:32.875511159 +0000 UTC m=+0.088892248 container died 742a4913030fd1a07bec8052fd43b92b67189a3490a18c7155776858baa9f4a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lalande, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:00:32 np0005591760 systemd[1]: var-lib-containers-storage-overlay-300b248c73174dc0578626e32551c80979c67955045d25d662595f4fd9164e61-merged.mount: Deactivated successfully.
Jan 22 05:00:32 np0005591760 podman[261781]: 2026-01-22 10:00:32.892296639 +0000 UTC m=+0.105677739 container remove 742a4913030fd1a07bec8052fd43b92b67189a3490a18c7155776858baa9f4a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lalande, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:00:32 np0005591760 podman[261781]: 2026-01-22 10:00:32.802625412 +0000 UTC m=+0.016006532 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:00:32 np0005591760 nova_compute[248045]: 2026-01-22 10:00:32.895 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:32 np0005591760 systemd[1]: libpod-conmon-742a4913030fd1a07bec8052fd43b92b67189a3490a18c7155776858baa9f4a1.scope: Deactivated successfully.
Jan 22 05:00:33 np0005591760 podman[261815]: 2026-01-22 10:00:33.012954303 +0000 UTC m=+0.029209115 container create 9fd14e020441a646c524dce0c173a7c65edafc20683926aa483f9ecccc586b8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_yonath, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 05:00:33 np0005591760 systemd[1]: Started libpod-conmon-9fd14e020441a646c524dce0c173a7c65edafc20683926aa483f9ecccc586b8b.scope.
Jan 22 05:00:33 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:00:33 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a276466ca6da371ac4cb8f48724d21fec95b343c90e42dd65e66911535a1dfa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:33 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a276466ca6da371ac4cb8f48724d21fec95b343c90e42dd65e66911535a1dfa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:33 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a276466ca6da371ac4cb8f48724d21fec95b343c90e42dd65e66911535a1dfa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:33 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a276466ca6da371ac4cb8f48724d21fec95b343c90e42dd65e66911535a1dfa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:00:33 np0005591760 podman[261815]: 2026-01-22 10:00:33.067467306 +0000 UTC m=+0.083722118 container init 9fd14e020441a646c524dce0c173a7c65edafc20683926aa483f9ecccc586b8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_yonath, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:00:33 np0005591760 podman[261815]: 2026-01-22 10:00:33.072255267 +0000 UTC m=+0.088510078 container start 9fd14e020441a646c524dce0c173a7c65edafc20683926aa483f9ecccc586b8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 05:00:33 np0005591760 podman[261815]: 2026-01-22 10:00:33.073347207 +0000 UTC m=+0.089602018 container attach 9fd14e020441a646c524dce0c173a7c65edafc20683926aa483f9ecccc586b8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 05:00:33 np0005591760 podman[261815]: 2026-01-22 10:00:33.001290443 +0000 UTC m=+0.017545274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:00:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:00:33 np0005591760 funny_yonath[261828]: {}
Jan 22 05:00:33 np0005591760 lvm[261930]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:00:33 np0005591760 lvm[261930]: VG ceph_vg0 finished
Jan 22 05:00:33 np0005591760 systemd[1]: libpod-9fd14e020441a646c524dce0c173a7c65edafc20683926aa483f9ecccc586b8b.scope: Deactivated successfully.
Jan 22 05:00:33 np0005591760 podman[261815]: 2026-01-22 10:00:33.580390711 +0000 UTC m=+0.596645532 container died 9fd14e020441a646c524dce0c173a7c65edafc20683926aa483f9ecccc586b8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_yonath, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 05:00:33 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a276466ca6da371ac4cb8f48724d21fec95b343c90e42dd65e66911535a1dfa1-merged.mount: Deactivated successfully.
Jan 22 05:00:33 np0005591760 podman[261815]: 2026-01-22 10:00:33.605763759 +0000 UTC m=+0.622018570 container remove 9fd14e020441a646c524dce0c173a7c65edafc20683926aa483f9ecccc586b8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:00:33 np0005591760 systemd[1]: libpod-conmon-9fd14e020441a646c524dce0c173a7c65edafc20683926aa483f9ecccc586b8b.scope: Deactivated successfully.
Jan 22 05:00:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:00:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:00:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:00:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:00:34 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:00:34.185 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:52:1d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:ec:a7:e9:bb:bd'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 05:00:34 np0005591760 nova_compute[248045]: 2026-01-22 10:00:34.186 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:34 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:00:34.186 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 05:00:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:34.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:34 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v856: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 6.7 KiB/s wr, 33 op/s
Jan 22 05:00:34 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:00:34 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:00:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:34.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:00:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:00:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:00:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:00:35 np0005591760 nova_compute[248045]: 2026-01-22 10:00:35.437 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:36.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:36 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v857: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 KiB/s wr, 33 op/s
Jan 22 05:00:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:00:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:36.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:00:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:37.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:37.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:37.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:37.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:00:37] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:00:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:00:37] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:00:37 np0005591760 nova_compute[248045]: 2026-01-22 10:00:37.897 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:00:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:38.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:00:38 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v858: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 KiB/s wr, 33 op/s
Jan 22 05:00:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:00:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:38.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:38.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:38.908Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:38.908Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:38.908Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:00:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:00:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:00:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:00:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:40.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:40 np0005591760 nova_compute[248045]: 2026-01-22 10:00:40.438 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:40 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v859: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 KiB/s wr, 33 op/s
Jan 22 05:00:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:40.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:41 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:00:41.187 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 05:00:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:42.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:42 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v860: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:00:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:42.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:42 np0005591760 nova_compute[248045]: 2026-01-22 10:00:42.899 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:00:44 np0005591760 podman[261978]: 2026-01-22 10:00:44.056723071 +0000 UTC m=+0.041178571 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 05:00:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:44.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:44 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v861: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:00:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:44.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:00:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:00:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:00:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:45 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:00:45 np0005591760 nova_compute[248045]: 2026-01-22 10:00:45.439 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:46.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:46 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v862: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:00:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:00:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:46.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:00:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:47.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:47.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:47.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:47.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:00:47.319 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:00:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:00:47.320 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:00:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:00:47.320 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:00:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:00:47] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:00:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:00:47] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:00:47 np0005591760 nova_compute[248045]: 2026-01-22 10:00:47.902 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:48.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:48 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v863: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:00:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:00:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:48.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:48.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:48.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:48.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:48.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:49 np0005591760 podman[262000]: 2026-01-22 10:00:49.084426805 +0000 UTC m=+0.062711578 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:00:49
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'images', 'volumes', 'vms', 'default.rgw.log', '.nfs']
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:00:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:00:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:00:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:00:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:00:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:50 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:00:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:00:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:50.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:00:50 np0005591760 nova_compute[248045]: 2026-01-22 10:00:50.442 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:50 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v864: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:00:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:50.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:52.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:52 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v865: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:00:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:52 np0005591760 nova_compute[248045]: 2026-01-22 10:00:52.904 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:00:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:54.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:54 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v866: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:00:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:54.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:00:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:00:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:00:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:55 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:00:55 np0005591760 nova_compute[248045]: 2026-01-22 10:00:55.443 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:56.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:56 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v867: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:00:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:56.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:57.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:57.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:57.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:57.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:00:57] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:00:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:00:57] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:00:57 np0005591760 nova_compute[248045]: 2026-01-22 10:00:57.907 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:00:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:00:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:00:58.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:00:58 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v868: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:00:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:00:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:00:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:00:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:00:58.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:00:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:58.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:58.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:58.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:00:58.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:00:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:01:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:01:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:01:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:00:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:01:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:00 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:01:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:00.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:00 np0005591760 nova_compute[248045]: 2026-01-22 10:01:00.446 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:00 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v869: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:01:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:01:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:00.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:01:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:02.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:02 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v870: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:02.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:02 np0005591760 nova_compute[248045]: 2026-01-22 10:01:02.909 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:01:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:04 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:04.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:01:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:01:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:01:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:05 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:01:05 np0005591760 nova_compute[248045]: 2026-01-22 10:01:05.447 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:06.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:06 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:01:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:06.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:07.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:07.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:07.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:07.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:01:07] "GET /metrics HTTP/1.1" 200 48602 "" "Prometheus/2.51.0"
Jan 22 05:01:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:01:07] "GET /metrics HTTP/1.1" 200 48602 "" "Prometheus/2.51.0"
Jan 22 05:01:07 np0005591760 nova_compute[248045]: 2026-01-22 10:01:07.912 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:08.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:08 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:01:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:08.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:08.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:08.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:08.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:01:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:01:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:01:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:10 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:01:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:10.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:10 np0005591760 nova_compute[248045]: 2026-01-22 10:01:10.449 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:10 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:01:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:10.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:12.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:12 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v875: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:12.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:12 np0005591760 nova_compute[248045]: 2026-01-22 10:01:12.915 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:01:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:01:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:14.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:01:14 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v876: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:14.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:01:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:01:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:01:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:15 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:01:15 np0005591760 podman[262110]: 2026-01-22 10:01:15.076425156 +0000 UTC m=+0.051566596 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:01:15 np0005591760 nova_compute[248045]: 2026-01-22 10:01:15.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:01:15 np0005591760 nova_compute[248045]: 2026-01-22 10:01:15.450 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:16 np0005591760 nova_compute[248045]: 2026-01-22 10:01:16.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:01:16 np0005591760 nova_compute[248045]: 2026-01-22 10:01:16.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:01:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:16.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:16 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v877: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:01:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:01:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:16.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:01:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:17.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:17.086Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:17.086Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:17.086Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:17 np0005591760 nova_compute[248045]: 2026-01-22 10:01:17.295 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:01:17 np0005591760 nova_compute[248045]: 2026-01-22 10:01:17.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:01:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:01:17] "GET /metrics HTTP/1.1" 200 48602 "" "Prometheus/2.51.0"
Jan 22 05:01:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:01:17] "GET /metrics HTTP/1.1" 200 48602 "" "Prometheus/2.51.0"
Jan 22 05:01:17 np0005591760 nova_compute[248045]: 2026-01-22 10:01:17.918 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:18 np0005591760 systemd-logind[747]: New session 55 of user zuul.
Jan 22 05:01:18 np0005591760 systemd[1]: Started Session 55 of User zuul.
Jan 22 05:01:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:18.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:18 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v878: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:01:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:01:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:18.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:01:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:18.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:18.912Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:18.912Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:18.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:19 np0005591760 nova_compute[248045]: 2026-01-22 10:01:19.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:01:19 np0005591760 nova_compute[248045]: 2026-01-22 10:01:19.326 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:01:19 np0005591760 nova_compute[248045]: 2026-01-22 10:01:19.326 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:01:19 np0005591760 nova_compute[248045]: 2026-01-22 10:01:19.326 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:01:19 np0005591760 nova_compute[248045]: 2026-01-22 10:01:19.327 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:01:19 np0005591760 nova_compute[248045]: 2026-01-22 10:01:19.327 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:01:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:01:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:01:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:01:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:01:19 np0005591760 podman[262168]: 2026-01-22 10:01:19.360558597 +0000 UTC m=+0.086138775 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 22 05:01:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:01:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:01:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:01:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/343209049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:01:19 np0005591760 nova_compute[248045]: 2026-01-22 10:01:19.688 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:01:19 np0005591760 nova_compute[248045]: 2026-01-22 10:01:19.957 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:01:19 np0005591760 nova_compute[248045]: 2026-01-22 10:01:19.959 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4586MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:01:19 np0005591760 nova_compute[248045]: 2026-01-22 10:01:19.960 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:01:19 np0005591760 nova_compute[248045]: 2026-01-22 10:01:19.960 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:01:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:01:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:01:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:01:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:01:20 np0005591760 nova_compute[248045]: 2026-01-22 10:01:20.059 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:01:20 np0005591760 nova_compute[248045]: 2026-01-22 10:01:20.059 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:01:20 np0005591760 nova_compute[248045]: 2026-01-22 10:01:20.194 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:01:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:20.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:20 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26731 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:20 np0005591760 nova_compute[248045]: 2026-01-22 10:01:20.453 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:20 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26696 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:20 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v879: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:01:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:01:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4226374028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:01:20 np0005591760 nova_compute[248045]: 2026-01-22 10:01:20.572 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:01:20 np0005591760 nova_compute[248045]: 2026-01-22 10:01:20.578 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:01:20 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.16896 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:20 np0005591760 nova_compute[248045]: 2026-01-22 10:01:20.602 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:01:20 np0005591760 nova_compute[248045]: 2026-01-22 10:01:20.603 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:01:20 np0005591760 nova_compute[248045]: 2026-01-22 10:01:20.604 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:01:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:20.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:20 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26752 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:20 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26708 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:21 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.16908 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Jan 22 05:01:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2325073477' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 05:01:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:01:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:22.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:01:22 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v880: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:22 np0005591760 nova_compute[248045]: 2026-01-22 10:01:22.605 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:01:22 np0005591760 nova_compute[248045]: 2026-01-22 10:01:22.605 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:01:22 np0005591760 nova_compute[248045]: 2026-01-22 10:01:22.605 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:01:22 np0005591760 nova_compute[248045]: 2026-01-22 10:01:22.619 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:01:22 np0005591760 nova_compute[248045]: 2026-01-22 10:01:22.619 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:01:22 np0005591760 nova_compute[248045]: 2026-01-22 10:01:22.619 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:01:22 np0005591760 nova_compute[248045]: 2026-01-22 10:01:22.619 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:01:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:22.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:22 np0005591760 nova_compute[248045]: 2026-01-22 10:01:22.919 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:01:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:24.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:24 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v881: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:24.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:01:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:01:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:01:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:01:25 np0005591760 nova_compute[248045]: 2026-01-22 10:01:25.455 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:01:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:26.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:01:26 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v882: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:01:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:26.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:27.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:27.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:27.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:27.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:01:27] "GET /metrics HTTP/1.1" 200 48602 "" "Prometheus/2.51.0"
Jan 22 05:01:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:01:27] "GET /metrics HTTP/1.1" 200 48602 "" "Prometheus/2.51.0"
Jan 22 05:01:27 np0005591760 nova_compute[248045]: 2026-01-22 10:01:27.922 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:01:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:28.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:28 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v883: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:01:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:28.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:28.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:28.915Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:28.916Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:28.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:29 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26791 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:29 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26756 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:01:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:01:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:01:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:30 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:01:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 22 05:01:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 05:01:30 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26812 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 22 05:01:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 05:01:30 np0005591760 ovs-vsctl[262537]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 22 05:01:30 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26771 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:30.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:30 np0005591760 nova_compute[248045]: 2026-01-22 10:01:30.456 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:01:30 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v884: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:01:30 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26833 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:30 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26789 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:30.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:30 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26848 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:30 np0005591760 virtqemud[247788]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 22 05:01:30 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26810 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:31 np0005591760 virtqemud[247788]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 22 05:01:31 np0005591760 virtqemud[247788]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 22 05:01:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 22 05:01:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2007184355' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 05:01:31 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26878 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:31 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: cache status {prefix=cache status} (starting...)
Jan 22 05:01:31 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:01:31 np0005591760 lvm[262836]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:01:31 np0005591760 lvm[262836]: VG ceph_vg0 finished
Jan 22 05:01:31 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26843 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:31 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: client ls {prefix=client ls} (starting...)
Jan 22 05:01:31 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:01:31 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26899 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:31 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26858 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 22 05:01:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 05:01:32 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26923 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: damage ls {prefix=damage ls} (starting...)
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:01:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 22 05:01:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1944491404' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 05:01:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 22 05:01:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: dump loads {prefix=dump loads} (starting...)
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:01:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:32.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:01:32 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17043 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:01:32 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v885: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:01:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 22 05:01:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3088653919' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 05:01:32 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17061 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:01:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:32.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:32 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26980 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T10:01:32.787+0000 7ff39ecc1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 05:01:32 np0005591760 ceph-mgr[74522]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 05:01:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Jan 22 05:01:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3442447601' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 05:01:32 np0005591760 nova_compute[248045]: 2026-01-22 10:01:32.923 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:01:32 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 22 05:01:32 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2526018370' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 22 05:01:32 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:01:32 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26992 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T10:01:32.969+0000 7ff39ecc1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 05:01:32 np0005591760 ceph-mgr[74522]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 05:01:33 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17091 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:33 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 22 05:01:33 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:01:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 22 05:01:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262127750' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 05:01:33 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: ops {prefix=ops} (starting...)
Jan 22 05:01:33 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:01:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 22 05:01:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274486076' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 05:01:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:01:33 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27034 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:33 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27043 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 22 05:01:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19409585' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 05:01:33 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.26996 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:33 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: session ls {prefix=session ls} (starting...)
Jan 22 05:01:33 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:01:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 22 05:01:33 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1075329346' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 05:01:33 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17160 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:33 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27070 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:34 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: status {prefix=status} (starting...)
Jan 22 05:01:34 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27085 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 22 05:01:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478203752' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 05:01:34 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27097 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:34.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 22 05:01:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3881758010' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 05:01:34 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v886: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:34 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27047 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 22 05:01:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/931165606' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 05:01:34 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27118 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:34.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:34 np0005591760 podman[263489]: 2026-01-22 10:01:34.833870888 +0000 UTC m=+0.095141035 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:01:34 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27080 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 22 05:01:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412125977' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 05:01:34 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 22 05:01:34 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3605814426' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 05:01:34 np0005591760 podman[263489]: 2026-01-22 10:01:34.945096321 +0000 UTC m=+0.206366469 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 05:01:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:01:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:01:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:01:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:01:35 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27142 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:35 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27101 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:35 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27113 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:35 np0005591760 ceph-mgr[74522]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 05:01:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T10:01:35.362+0000 7ff39ecc1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 05:01:35 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27163 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:35 np0005591760 nova_compute[248045]: 2026-01-22 10:01:35.457 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:01:35 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27128 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:35 np0005591760 podman[263667]: 2026-01-22 10:01:35.565366212 +0000 UTC m=+0.082136356 container exec e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 05:01:35 np0005591760 podman[263667]: 2026-01-22 10:01:35.573967065 +0000 UTC m=+0.090737209 container exec_died e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 05:01:35 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27190 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:35 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27155 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 22 05:01:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4253525838' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 05:01:36 np0005591760 podman[263811]: 2026-01-22 10:01:36.057021552 +0000 UTC m=+0.046993951 container exec 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 05:01:36 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27208 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:36 np0005591760 podman[263811]: 2026-01-22 10:01:36.093246869 +0000 UTC m=+0.083219269 container exec_died 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 05:01:36 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27223 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 22 05:01:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3311619361' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 05:01:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:36.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:36 np0005591760 podman[263911]: 2026-01-22 10:01:36.402759571 +0000 UTC m=+0.094828706 container exec 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 05:01:36 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27229 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:36 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17331 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:36 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v887: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:01:36 np0005591760 podman[263911]: 2026-01-22 10:01:36.585119168 +0000 UTC m=+0.277188303 container exec_died 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 05:01:36 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27194 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:36 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27262 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:01:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:36.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:01:36 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 22 05:01:36 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3354906603' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 05:01:36 np0005591760 podman[264020]: 2026-01-22 10:01:36.847229019 +0000 UTC m=+0.050380930 container exec fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 05:01:36 np0005591760 podman[264020]: 2026-01-22 10:01:36.858989562 +0000 UTC m=+0.062141483 container exec_died fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 05:01:36 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17355 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:36 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17364 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:37.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:37.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:37.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:37.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:37 np0005591760 podman[264106]: 2026-01-22 10:01:37.115141474 +0000 UTC m=+0.074178365 container exec 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, architecture=x86_64, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 22 05:01:37 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27286 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:37 np0005591760 podman[264137]: 2026-01-22 10:01:37.181868575 +0000 UTC m=+0.049832265 container exec_died 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vendor=Red Hat, Inc., release=1793, version=2.2.4)
Jan 22 05:01:37 np0005591760 podman[264106]: 2026-01-22 10:01:37.187020553 +0000 UTC m=+0.146057443 container exec_died 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, architecture=x86_64, io.openshift.tags=Ceph keepalived, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Jan 22 05:01:37 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27292 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:37 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27239 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 22 05:01:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1983059013' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 05:01:37 np0005591760 podman[264182]: 2026-01-22 10:01:37.429238874 +0000 UTC m=+0.057050740 container exec a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 05:01:37 np0005591760 podman[264182]: 2026-01-22 10:01:37.465028741 +0000 UTC m=+0.092840587 container exec_died a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 05:01:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:01:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:01:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:01:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:01:37 np0005591760 podman[264277]: 2026-01-22 10:01:37.631625154 +0000 UTC m=+0.056274986 container exec 807fd1c66fa47b1f282c2487140a988a20428cb5ae8b3725296c3c9838386c77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:01:37 np0005591760 podman[264277]: 2026-01-22 10:01:37.636965438 +0000 UTC m=+0.061615269 container exec_died 807fd1c66fa47b1f282c2487140a988a20428cb5ae8b3725296c3c9838386c77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 05:01:37 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17397 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 22 05:01:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710513929' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 05:01:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006793 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.010908 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.010942 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996648788s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 242.400588989s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996540070s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.400588989s@ mbc={}] exit Reset 0.000171 1 0.000255
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996540070s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.400588989s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996540070s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.400588989s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996540070s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.400588989s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996540070s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.400588989s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996540070s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.400588989s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=54'1164 lcod 54'1164 mlcod 54'1164 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.854102 1 0.000162
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=54'1164 lcod 54'1164 mlcod 54'1164 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007032 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=54'1164 lcod 54'1164 mlcod 54'1164 active+remapped mbc={255={}}] exit Started/Primary 2.012545 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=54'1164 lcod 54'1164 mlcod 54'1164 active+remapped mbc={255={}}] exit Started 2.012637 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=54'1164 lcod 54'1164 mlcod 54'1164 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.997281075s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 54'1164 active pruub 242.401596069s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.997172356s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 0'0 unknown NOTIFY pruub 242.401596069s@ mbc={}] exit Reset 0.000128 1 0.000164
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.997172356s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 0'0 unknown NOTIFY pruub 242.401596069s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.997172356s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 0'0 unknown NOTIFY pruub 242.401596069s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.997172356s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 0'0 unknown NOTIFY pruub 242.401596069s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.997172356s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 0'0 unknown NOTIFY pruub 242.401596069s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.997172356s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 0'0 unknown NOTIFY pruub 242.401596069s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.914650 1 0.000187
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007853 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.815235 1 0.000099
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007399 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.012001 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.012102 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.013325 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.013484 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996450424s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 242.401687622s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[53,66)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996104240s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 242.401489258s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.995922089s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401489258s@ mbc={}] exit Reset 0.000356 1 0.000678
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996085167s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401687622s@ mbc={}] exit Reset 0.000416 1 0.000535
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996085167s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401687622s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996085167s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401687622s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996085167s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401687622s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996085167s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401687622s@ mbc={}] exit Start 0.000086 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.995922089s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401489258s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.995922089s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401489258s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.996085167s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401687622s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.995922089s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401489258s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.995922089s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401489258s@ mbc={}] exit Start 0.000193 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 68 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68 pruub=14.995922089s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.401489258s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 6586368 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.9 deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.9 deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fcb40000/0x0/0x4ffc00000, data 0x6d04d/0xd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 6586368 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.254432 6 0.000096
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.252876 6 0.000302
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.254239 6 0.000064
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.252676 6 0.000537
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000995 2 0.000053
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.15( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001000 2 0.000016
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.1d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001038 2 0.000014
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.5( v 67'1165 (0'0,67'1165] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=54'1164 lcod 54'1164 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001081 2 0.000022
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.d( v 43'1161 (0'0,43'1161] local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.15( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 DELETING pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.065432 2 0.000143
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.15( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.066481 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.15( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.320974 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.1d( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 DELETING pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.102264 2 0.000084
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.1d( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.103292 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.1d( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=66/67 n=5 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.356315 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.5( v 67'1165 (0'0,67'1165] lb MIN local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 DELETING pi=[53,68)/1 crt=67'1165 lcod 54'1164 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.161512 2 0.000091
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.5( v 67'1165 (0'0,67'1165] lb MIN local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=67'1165 lcod 54'1164 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.162582 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.5( v 67'1165 (0'0,67'1165] lb MIN local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=67'1165 lcod 54'1164 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.416842 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.d( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 DELETING pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.220811 2 0.000099
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.d( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.221942 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 69 pg[9.d( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=66/67 n=6 ec=53/30 lis/c=66/53 les/c/f=67/54/0 sis=68) [2] r=-1 lpr=68 pi=[53,68)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.474887 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 6545408 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 69 heartbeat osd_stat(store_statfs(0x4fcb44000/0x0/0x4ffc00000, data 0x6eeb7/0xd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.794133186s of 10.912025452s, submitted: 171
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 6545408 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 702006 data_alloc: 218103808 data_used: 176128
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 6545408 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: mgrc handle_mgr_map Got map version 32
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1082790531,v1:192.168.122.100:6801/1082790531]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 6356992 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.200266 48 0.000145
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.204313 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.204367 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.204409 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.799471855s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 241.659103394s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.799409866s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.659103394s@ mbc={}] exit Reset 0.000116 1 0.000193
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.799409866s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.659103394s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.799409866s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.659103394s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.799409866s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.659103394s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.799409866s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.659103394s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.799409866s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.659103394s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.202356 48 0.000166
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.205358 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.205422 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.205471 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.797591209s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 241.657760620s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.797536850s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657760620s@ mbc={}] exit Reset 0.000107 1 0.000201
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.797536850s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657760620s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.797536850s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657760620s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.797536850s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657760620s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.797536850s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657760620s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.797536850s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657760620s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.203210 48 0.000177
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.206842 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.206887 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.206915 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.796589851s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 241.657379150s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.796569824s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657379150s@ mbc={}] exit Reset 0.000043 1 0.000086
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.796569824s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657379150s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.796569824s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657379150s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.796569824s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657379150s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.796569824s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657379150s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.796569824s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.657379150s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.205119 48 0.000175
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.207669 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.207740 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.207791 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.795056343s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 241.656280518s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.794763565s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.656280518s@ mbc={}] exit Reset 0.000327 1 0.000431
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.794763565s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.656280518s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.794763565s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.656280518s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.794763565s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.656280518s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.794763565s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.656280518s@ mbc={}] exit Start 0.000105 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70 pruub=8.794763565s) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.656280518s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 70 handle_osd_map epochs [70,70], i have 70, src has [1,70]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6(unlocked)] enter Initial
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=0 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000055 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=0 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000037
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000106 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000148 1 0.000227
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e(unlocked)] enter Initial
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=0 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000079 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=0 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000063
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000032 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000234 1 0.000098
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000716 2 0.000085
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000706 2 0.000064
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 70 pg[6.e( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 70 handle_osd_map epochs [71,71], i have 71, src has [1,71]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.527831 3 0.000044
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.527871 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000088 1 0.000125
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000046
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.522283 2 0.000125
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.522680 2 0.000130
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.523905 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.523322 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=59/60 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.528435 3 0.000042
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.529107 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 lc 42'19 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000268 1 0.000960
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.527750 3 0.000189
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.528113 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000146 1 0.000385
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000047
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000057 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.530006 3 0.000047
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.530500 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=70) [1] r=-1 lpr=70 pi=[53,70)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000196 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000884
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000323 1 0.000820
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000111 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000136 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000046 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000256
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 lc 42'19 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 lc 42'19 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.007525 5 0.000461
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 lc 42'19 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 lc 42'19 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000065 1 0.000029
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 lc 42'19 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 lc 42'19 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 lc 42'19 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.007804 5 0.000753
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007842 1 0.000023
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.007625 1 0.000053
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000020 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.059193 1 0.000120
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000023 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 71 pg[6.6( v 43'42 (0'0,43'42] local-lis/les=70/71 n=2 ec=49/17 lis/c=70/59 les/c/f=71/60/0 sis=70) [0] r=0 lpr=70 pi=[59,70)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 6479872 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 71 heartbeat osd_stat(store_statfs(0x4fcb3d000/0x0/0x4ffc00000, data 0x731b0/0xdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 71 handle_osd_map epochs [72,72], i have 72, src has [1,72]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005126 4 0.000311
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.005509 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006097 4 0.000339
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.006476 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008694 4 0.000057
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.008798 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007949 4 0.000086
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.008064 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 72 ms_handle_reset con 0x5581de0bd800 session 0x5581de188960
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 6438912 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.437318 5 0.000203
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000159 1 0.000119
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.440026 5 0.000631
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.438973 5 0.000861
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.439769 5 0.000645
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001664 1 0.000027
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035484 2 0.000035
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.036274 1 0.000029
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001242 1 0.000019
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.030298 2 0.000030
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.067694 1 0.000073
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000494 1 0.000019
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.045263 2 0.000026
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.113415 1 0.000026
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000616 1 0.000020
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038088 2 0.000031
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 72 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fcb3b000/0x0/0x4ffc00000, data 0x754a3/0xe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 6397952 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 721875 data_alloc: 218103808 data_used: 176128
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 72 handle_osd_map epochs [73,73], i have 73, src has [1,73]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.014379 1 0.000070
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.489203 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.497283 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.497309 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947658539s) [1] async=[1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 250.834869385s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947585106s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.834869385s@ mbc={}] exit Reset 0.000127 1 0.000195
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947585106s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.834869385s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947585106s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.834869385s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947585106s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.834869385s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947585106s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.834869385s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947585106s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.834869385s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.898311 1 0.000090
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.983053 1 0.000058
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.491183 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.496720 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.496905 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948608398s) [1] async=[1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 250.836120605s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948558807s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836120605s@ mbc={}] exit Reset 0.000084 1 0.000149
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948558807s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836120605s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948558807s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836120605s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948558807s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836120605s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948558807s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836120605s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948558807s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836120605s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.937214 1 0.000050
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.490196 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.499024 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.499053 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.490888 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.497428 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.498193 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[53,71)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948017120s) [1] async=[1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 250.835922241s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947887421s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.835922241s@ mbc={}] exit Reset 0.000184 1 0.000610
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948321342s) [1] async=[1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 250.836303711s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948175430s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836303711s@ mbc={}] exit Reset 0.000178 1 0.000762
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947887421s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.835922241s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947887421s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.835922241s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947887421s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.835922241s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948175430s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836303711s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948175430s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836303711s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947887421s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.835922241s@ mbc={}] exit Start 0.000064 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948175430s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836303711s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948175430s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836303711s@ mbc={}] exit Start 0.000088 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.948175430s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.836303711s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 73 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73 pruub=14.947887421s) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.835922241s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 6389760 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.240096 61 0.000198
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.242923 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.243041 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.243122 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.759959221s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 249.657836914s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.759923935s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] exit Reset 0.000070 1 0.000129
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.759923935s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.759923935s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.759923935s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.759923935s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.759923935s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=49) [0] r=0 lpr=49 crt=43'42 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 30.275335 70 0.000255
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=49) [0] r=0 lpr=49 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 30.276747 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.242936 61 0.000201
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.245756 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.245808 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.245860 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=49) [0] r=0 lpr=49 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 30.847113 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.757369995s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 249.656463623s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.757349014s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656463623s@ mbc={}] exit Reset 0.000039 1 0.000210
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.757349014s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656463623s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.757349014s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656463623s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.757349014s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656463623s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.757349014s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656463623s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=12.757349014s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656463623s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=49) [0] r=0 lpr=49 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 30.847400 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=49) [0] r=0 lpr=49 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74 pruub=9.729435921s) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 active pruub 246.628738403s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74 pruub=9.729285240s) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.628738403s@ mbc={}] exit Reset 0.000202 1 0.000849
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74 pruub=9.729285240s) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.628738403s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74 pruub=9.729285240s) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.628738403s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74 pruub=9.729285240s) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.628738403s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74 pruub=9.729285240s) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.628738403s@ mbc={}] exit Start 0.000087 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74 pruub=9.729285240s) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.628738403s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 74 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017461 7 0.000283
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018580 7 0.000076
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017555 7 0.000542
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018330 7 0.000071
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000179 1 0.000041
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000151 1 0.000135
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000261 1 0.000145
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.6( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000396 1 0.000144
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.16( v 43'1161 (0'0,43'1161] local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 74 handle_osd_map epochs [73,74], i have 74, src has [1,74]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.e( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 DELETING pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060855 2 0.000394
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.e( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.061077 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.e( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.078756 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.1e( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 DELETING pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.097718 2 0.000171
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.1e( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.097944 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.1e( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.116597 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.6( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 DELETING pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.142408 2 0.000341
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.6( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.142743 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.6( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=71/72 n=6 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.160598 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.16( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 DELETING pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.171641 2 0.000143
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.16( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.172142 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 74 pg[9.16( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=71/72 n=5 ec=53/30 lis/c=71/53 les/c/f=72/54/0 sis=73) [1] r=-1 lpr=73 pi=[53,73)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.190566 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 6332416 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016452 3 0.000029
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.016497 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000131 1 0.000174
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000079 1 0.000073
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000041 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018543 3 0.000032
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.018567 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=74) [2] r=-1 lpr=74 pi=[53,74)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000034 1 0.000047
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000126 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000159
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017018 6 0.000202
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001135 2 0.000545
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[6.8( v 43'42 (0'0,43'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[6.8( v 43'42 (0'0,43'42] lb MIN local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74) [1] r=-1 lpr=74 DELETING pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.001778 1 0.000037
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[6.8( v 43'42 (0'0,43'42] lb MIN local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002964 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 75 pg[6.8( v 43'42 (0'0,43'42] lb MIN local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=74) [1] r=-1 lpr=74 pi=[49,74)/1 crt=43'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.020581 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 75 heartbeat osd_stat(store_statfs(0x4fc727000/0x0/0x4ffc00000, data 0x7964c/0xe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 6316032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9(unlocked)] enter Initial
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012774 4 0.000431
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.012939 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=0 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000106 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=0 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000104
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000329 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000164 1 0.000440
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012697 4 0.000088
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.013586 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 76 handle_osd_map epochs [75,76], i have 76, src has [1,76]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 29.273066 67 0.000192
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 29.276148 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 29.276216 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 29.276345 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.726813316s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 249.657867432s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.726340294s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657867432s@ mbc={}] exit Reset 0.000510 1 0.000795
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.726340294s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657867432s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.726340294s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657867432s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.726340294s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657867432s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.726340294s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657867432s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 29.275317 67 0.001871
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.726340294s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657867432s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 29.278278 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 29.278367 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 29.278392 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.724657059s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 249.656433105s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.724640846s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656433105s@ mbc={}] exit Reset 0.000033 1 0.000525
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.724640846s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656433105s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.724640846s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656433105s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.724640846s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656433105s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.724640846s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656433105s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76 pruub=10.724640846s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.656433105s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002673 2 0.001652
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000036 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[6.9( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.005559 5 0.000808
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000070 1 0.000074
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000618 1 0.000039
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.005596 5 0.000954
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028301 2 0.000033
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.027961 1 0.000054
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000520 1 0.000025
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052366 2 0.000031
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 76 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 76 heartbeat osd_stat(store_statfs(0x4fc723000/0x0/0x4ffc00000, data 0x7b65a/0xe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 6234112 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.001066 3 0.000547
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.001103 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000220 1 0.000247
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[6.9( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000379 2 0.000130
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[6.9( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004577 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[6.9( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[6.9( v 43'42 (0'0,43'42] local-lis/les=76/77 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.002181 3 0.000026
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.002206 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=76) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.915779 1 0.000049
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003763 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.017369 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.017510 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 77 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.970312 1 0.000043
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.005540 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.018503 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.018523 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.000308990s) [2] async=[2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 254.934616089s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.000247955s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.934616089s@ mbc={}] exit Reset 0.000105 1 0.000099
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.000247955s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.934616089s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.000247955s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.934616089s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000450 1 0.000464
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.001207352s) [2] async=[2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 254.935806274s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.001164436s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.935806274s@ mbc={}] exit Reset 0.000606 1 0.001983
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.001164436s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.935806274s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.001164436s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.935806274s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.001164436s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.935806274s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.001164436s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.935806274s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.001164436s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.935806274s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001727 2 0.000037
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[6.9( v 43'42 (0'0,43'42] local-lis/les=76/77 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[6.9( v 43'42 (0'0,43'42] local-lis/les=76/77 n=1 ec=49/17 lis/c=76/57 les/c/f=77/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001150 4 0.000093
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[6.9( v 43'42 (0'0,43'42] local-lis/les=76/77 n=1 ec=49/17 lis/c=76/57 les/c/f=77/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[6.9( v 43'42 (0'0,43'42] local-lis/les=76/77 n=1 ec=49/17 lis/c=76/57 les/c/f=77/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[6.9( v 43'42 (0'0,43'42] local-lis/les=76/77 n=1 ec=49/17 lis/c=76/57 les/c/f=77/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=43'42 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.000247955s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.934616089s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.000247955s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.934616089s@ mbc={}] exit Start 0.000687 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77 pruub=15.000247955s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.934616089s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002485 2 0.000031
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 77 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 6094848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 708202 data_alloc: 218103808 data_used: 167936
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.213058472s of 10.291069984s, submitted: 109
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 77 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 31.291466 73 0.000274
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 31.294277 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 31.294336 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 31.294373 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.708597183s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 249.657836914s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.708545685s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] exit Reset 0.000088 1 0.000153
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.708545685s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.708545685s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.708545685s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.708545685s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.708545685s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657836914s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.014820 3 0.000054
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.016639 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.013531 3 0.000048
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.016079 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 31.293143 73 0.000221
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 31.296780 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 31.296911 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 31.296965 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.706829071s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 249.657592773s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.706775665s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657592773s@ mbc={}] exit Reset 0.000095 1 0.000164
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.706775665s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657592773s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.706775665s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657592773s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.706775665s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657592773s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.706775665s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657592773s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78 pruub=8.706775665s) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.657592773s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019064 7 0.000078
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000075 1 0.000042
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.8( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.004797 5 0.000358
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000084 1 0.000063
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000419 1 0.000036
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020633 7 0.000772
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.003740 5 0.001668
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.8( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 DELETING pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.056841 2 0.000397
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.8( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.056958 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.8( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=75/76 n=6 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.076060 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 78 heartbeat osd_stat(store_statfs(0x4fc71e000/0x0/0x4ffc00000, data 0x7f953/0xed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.090737 2 0.000053
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.090184 1 0.000048
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000729 1 0.000039
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052062 2 0.000047
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.143120 1 0.000051
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.18( v 43'1161 (0'0,43'1161] local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.18( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 DELETING pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.037266 2 0.000199
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.18( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.180436 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 78 pg[9.18( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=75/76 n=5 ec=53/30 lis/c=75/53 les/c/f=76/54/0 sis=77) [2] r=-1 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.201803 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 6045696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.004607 3 0.000040
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.004654 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006123 3 0.000043
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.006200 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=78) [1] r=-1 lpr=78 pi=[53,78)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000079 1 0.000118
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000049
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000145 1 0.000222
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000072 1 0.000064
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000044 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.910095 1 0.000119
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006442 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.023117 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.023137 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998394966s) [2] async=[2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 256.954620361s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998344421s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.954620361s@ mbc={}] exit Reset 0.000094 1 0.000143
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998344421s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.954620361s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998344421s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.954620361s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998344421s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.954620361s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998344421s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.954620361s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998344421s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.954620361s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.857820 1 0.000079
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006167 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.022259 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.022276 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[53,77)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998961449s) [2] async=[2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 256.955718994s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998907089s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.955718994s@ mbc={}] exit Reset 0.000075 1 0.000103
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998907089s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.955718994s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998907089s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.955718994s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998907089s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.955718994s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998907089s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.955718994s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 79 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79 pruub=14.998907089s) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.955718994s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 6004736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001491 4 0.000087
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001684 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002656 4 0.000065
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002778 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 80 handle_osd_map epochs [80,80], i have 80, src has [1,80]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.005835 7 0.000200
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000046 1 0.000044
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.9( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.005964 7 0.000447
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000031 1 0.000031
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.9( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 DELETING pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045713 2 0.000190
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.9( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.045795 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.9( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=77/78 n=6 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.051793 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.19( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 DELETING pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.097511 2 0.000091
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.19( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.097646 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.19( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=77/78 n=5 ec=53/30 lis/c=77/53 les/c/f=78/54/0 sis=79) [2] r=-1 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.103647 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 80 handle_osd_map epochs [80,80], i have 80, src has [1,80]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.194313 5 0.000282
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.195330 5 0.000366
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000057 1 0.000054
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000624 1 0.000017
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.063594 2 0.000049
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.064479 1 0.000626
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000506 1 0.000017
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031172 2 0.000062
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 80 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 80 handle_osd_map epochs [81,81], i have 81, src has [1,81]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.178492 1 0.000054
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.438404 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 1.440107 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.146707 1 0.000082
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 1.440135 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.437424 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 1.440223 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 1.440252 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[53,79)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.757055283s) [1] async=[1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 259.152893066s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756841660s) [1] async=[1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 259.152709961s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756978989s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152893066s@ mbc={}] exit Reset 0.000113 1 0.000162
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756978989s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152893066s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756978989s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152893066s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756978989s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152893066s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756978989s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152893066s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756978989s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152893066s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756772041s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152709961s@ mbc={}] exit Reset 0.000120 1 0.000186
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756772041s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152709961s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756772041s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152709961s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756772041s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152709961s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756772041s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152709961s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 81 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81 pruub=15.756772041s) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.152709961s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 5922816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.010453 7 0.000131
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000064 1 0.000109
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014935 7 0.000073
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000041 1 0.000032
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.a( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 DELETING pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.068120 2 0.000251
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.a( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.068236 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.a( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=79/80 n=6 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.078788 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.1a( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 DELETING pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.100552 2 0.000202
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.1a( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.100660 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 82 pg[9.1a( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=79/80 n=5 ec=53/30 lis/c=79/53 les/c/f=80/54/0 sis=81) [1] r=-1 lpr=81 pi=[53,81)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.115632 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 5906432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 82 heartbeat osd_stat(store_statfs(0x4fc714000/0x0/0x4ffc00000, data 0x8989c/0xf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 5906432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 665635 data_alloc: 218103808 data_used: 139264
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 82 heartbeat osd_stat(store_statfs(0x4fc714000/0x0/0x4ffc00000, data 0x8989c/0xf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 5857280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b(unlocked)] enter Initial
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=0 pi=[61,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=0 pi=[61,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000032
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000232 1 0.000108
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001373 2 0.000100
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 83 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 83 heartbeat osd_stat(store_statfs(0x4fc714000/0x0/0x4ffc00000, data 0x8989c/0xf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.18 deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 11.18 deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 5849088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 84 handle_osd_map epochs [83,84], i have 84, src has [1,84]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000995 2 0.000055
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.002671 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=83/61 les/c/f=84/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.001903 3 0.000174
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=83/61 les/c/f=84/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=83/61 les/c/f=84/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000155 1 0.000064
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=83/61 les/c/f=84/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=83/61 les/c/f=84/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=83/61 les/c/f=84/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=83/61 les/c/f=84/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007851 3 0.000052
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=83/61 les/c/f=84/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=83/61 les/c/f=84/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000028 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 84 pg[6.b( v 43'42 (0'0,43'42] local-lis/les=83/84 n=1 ec=49/17 lis/c=83/61 les/c/f=84/62/0 sis=83) [0] r=0 lpr=83 pi=[61,83)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 5849088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 85 heartbeat osd_stat(store_statfs(0x4fc70f000/0x0/0x4ffc00000, data 0x8dac5/0xfb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 5849088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 85 heartbeat osd_stat(store_statfs(0x4fc70b000/0x0/0x4ffc00000, data 0x8fd4c/0xfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 5849088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 684324 data_alloc: 218103808 data_used: 139264
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.053511620s of 10.119697571s, submitted: 76
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 5832704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 87 heartbeat osd_stat(store_statfs(0x4fc709000/0x0/0x4ffc00000, data 0x91fd3/0x101000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 5816320 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 87 handle_osd_map epochs [88,89], i have 87, src has [1,89]
Jan 22 05:01:37 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 88 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=43'42 mlcod 43'42 active+clean] exit Started/Primary/Active/Clean 19.994608 51 0.000220
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 88 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=43'42 mlcod 43'42 active mbc={255={}}] exit Started/Primary/Active 20.010195 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 88 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=43'42 mlcod 43'42 active mbc={255={}}] exit Started/Primary 20.533580 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 88 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=43'42 mlcod 43'42 active mbc={255={}}] exit Started 20.533653 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 88 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=43'42 mlcod 43'42 active mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 88 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=11.997930527s) [1] r=-1 lpr=88 pi=[70,88)/1 crt=43'42 mlcod 43'42 active pruub 265.397277832s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 89 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=11.997831345s) [1] r=-1 lpr=88 pi=[70,88)/1 crt=43'42 mlcod 0'0 unknown NOTIFY pruub 265.397277832s@ mbc={}] exit Reset 0.000150 2 0.000220
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 89 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=11.997831345s) [1] r=-1 lpr=88 pi=[70,88)/1 crt=43'42 mlcod 0'0 unknown NOTIFY pruub 265.397277832s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 89 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=11.997831345s) [1] r=-1 lpr=88 pi=[70,88)/1 crt=43'42 mlcod 0'0 unknown NOTIFY pruub 265.397277832s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 89 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=11.997831345s) [1] r=-1 lpr=88 pi=[70,88)/1 crt=43'42 mlcod 0'0 unknown NOTIFY pruub 265.397277832s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 89 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=11.997831345s) [1] r=-1 lpr=88 pi=[70,88)/1 crt=43'42 mlcod 0'0 unknown NOTIFY pruub 265.397277832s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 89 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=11.997831345s) [1] r=-1 lpr=88 pi=[70,88)/1 crt=43'42 mlcod 0'0 unknown NOTIFY pruub 265.397277832s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 89 handle_osd_map epochs [88,89], i have 89, src has [1,89]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 5816320 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88) [1] r=-1 lpr=88 pi=[70,88)/1 crt=43'42 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006098 7 0.000063
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88) [1] r=-1 lpr=88 pi=[70,88)/1 crt=43'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88) [1] r=-1 lpr=88 pi=[70,88)/1 crt=43'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f(unlocked)] enter Initial
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=0 pi=[61,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=0 pi=[61,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000031
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000417 1 0.000049
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.000604 2 0.000058
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.f( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88) [1] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=43'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.013407 2 0.000196
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88) [1] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=43'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.013445 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88) [1] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=43'42 mlcod 0'0 active mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88) [1] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=43'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88) [1] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=43'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000071 1 0.000049
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.e( v 43'42 (0'0,43'42] local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88) [1] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=43'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.e( v 43'42 (0'0,43'42] lb MIN local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88) [1] r=-1 lpr=88 DELETING pi=[70,88)/1 luod=0'0 crt=43'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.009135 2 0.000157
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.e( v 43'42 (0'0,43'42] lb MIN local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88) [1] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=43'42 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.009239 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 90 pg[6.e( v 43'42 (0'0,43'42] lb MIN local-lis/les=70/71 n=1 ec=49/17 lis/c=70/70 les/c/f=71/71/0 sis=88) [1] r=-1 lpr=88 pi=[70,88)/1 luod=0'0 crt=43'42 mlcod 0'0 active mbc={}] exit Started 1.028936 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 5767168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999184 2 0.000070
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.000285 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 lc 0'0 (0'0,43'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 lc 42'1 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 lc 42'1 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 lc 42'1 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=90/61 les/c/f=91/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.001421 3 0.000416
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 lc 42'1 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=90/61 les/c/f=91/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 lc 42'1 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=90/61 les/c/f=91/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000087 1 0.000067
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 lc 42'1 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=90/61 les/c/f=91/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 lc 42'1 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=90/61 les/c/f=91/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000010 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 lc 42'1 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=90/61 les/c/f=91/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 91 handle_osd_map epochs [90,91], i have 91, src has [1,91]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=90/61 les/c/f=91/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.126415 3 0.000099
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=90/61 les/c/f=91/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=90/61 les/c/f=91/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 91 pg[6.f( v 43'42 (0'0,43'42] local-lis/les=90/91 n=1 ec=49/17 lis/c=90/61 les/c/f=91/62/0 sis=90) [0] r=0 lpr=90 pi=[61,90)/1 crt=43'42 mlcod 43'42 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 5677056 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 712115 data_alloc: 218103808 data_used: 143360
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 5668864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0xa0709/0x118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 5652480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 4603904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 4603904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.e scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.e scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 4603904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 725716 data_alloc: 218103808 data_used: 147456
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.474132538s of 10.522734642s, submitted: 50
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 4595712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fc6f0000/0x0/0x4ffc00000, data 0xa26ab/0x11b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 4595712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.c scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.c scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 4595712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 54.177321 125 0.002098
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 54.181229 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 54.181289 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 54.181324 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=95 pruub=9.823126793s) [1] r=-1 lpr=95 pi=[53,95)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 273.659820557s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=95 pruub=9.823087692s) [1] r=-1 lpr=95 pi=[53,95)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659820557s@ mbc={}] exit Reset 0.000083 1 0.000153
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=95 pruub=9.823087692s) [1] r=-1 lpr=95 pi=[53,95)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659820557s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=95 pruub=9.823087692s) [1] r=-1 lpr=95 pi=[53,95)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659820557s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=95 pruub=9.823087692s) [1] r=-1 lpr=95 pi=[53,95)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659820557s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=95 pruub=9.823087692s) [1] r=-1 lpr=95 pi=[53,95)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659820557s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 95 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=95 pruub=9.823087692s) [1] r=-1 lpr=95 pi=[53,95)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659820557s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=95) [1] r=-1 lpr=95 pi=[53,95)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.644570 3 0.000045
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 54.822129 128 0.000525
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 54.826851 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 54.827675 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 54.827732 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96 pruub=9.176632881s) [1] r=-1 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 273.659759521s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=95) [1] r=-1 lpr=95 pi=[53,95)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.646416 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=95) [1] r=-1 lpr=95 pi=[53,95)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96 pruub=9.176480293s) [1] r=-1 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659759521s@ mbc={}] exit Reset 0.000259 1 0.001867
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96 pruub=9.176480293s) [1] r=-1 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659759521s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96 pruub=9.176480293s) [1] r=-1 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659759521s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96 pruub=9.176480293s) [1] r=-1 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659759521s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000521 1 0.002502
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96 pruub=9.176480293s) [1] r=-1 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659759521s@ mbc={}] exit Start 0.000074 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96 pruub=9.176480293s) [1] r=-1 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.659759521s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000150 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001945 2 0.000411
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000069 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 96 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.b scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.b scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 4603904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 96 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1] r=-1 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006503 3 0.000744
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1] r=-1 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.006750 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1] r=-1 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004417 3 0.000183
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.006544 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000347 1 0.000442
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000094 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000093 1 0.000244
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000054 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002626 5 0.000646
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000095 1 0.000074
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000379 1 0.000059
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.014291 2 0.000063
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 97 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 4603904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 741754 data_alloc: 218103808 data_used: 147456
Jan 22 05:01:37 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.993310 1 0.000131
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.011199 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.017793 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.018039 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[53,96)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98 pruub=14.991450310s) [1] async=[1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 281.493499756s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98 pruub=14.991397858s) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.493499756s@ mbc={}] exit Reset 0.000103 1 0.000222
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98 pruub=14.991397858s) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.493499756s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98 pruub=14.991397858s) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.493499756s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98 pruub=14.991397858s) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.493499756s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98 pruub=14.991397858s) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.493499756s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98 pruub=14.991397858s) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.493499756s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010886 4 0.000166
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.011163 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.004412 5 0.000570
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000115 1 0.000081
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000422 1 0.000120
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035440 2 0.000078
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 98 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 4587520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 98 heartbeat osd_stat(store_statfs(0x4fc6e1000/0x0/0x4ffc00000, data 0xaa882/0x127000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.970176 1 0.000163
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.011094 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.022295 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.022446 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[53,97)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99 pruub=14.993288994s) [1] async=[1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 282.506774902s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99 pruub=14.992941856s) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 282.506774902s@ mbc={}] exit Reset 0.000417 1 0.000523
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99 pruub=14.992941856s) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 282.506774902s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99 pruub=14.992941856s) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 282.506774902s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99 pruub=14.992941856s) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 282.506774902s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99 pruub=14.992941856s) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 282.506774902s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99 pruub=14.992941856s) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 282.506774902s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022044 7 0.000105
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000080 1 0.000050
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.10( v 43'1161 (0'0,43'1161] local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.10( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98) [1] r=-1 lpr=98 DELETING pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.016376 2 0.000186
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.10( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.016496 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 99 pg[9.10( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=96/97 n=6 ec=53/30 lis/c=96/53 les/c/f=97/54/0 sis=98) [1] r=-1 lpr=98 pi=[53,98)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.038582 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 4685824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 100 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016292 7 0.000146
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 100 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 100 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 100 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000146 1 0.000115
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 100 pg[9.11( v 43'1161 (0'0,43'1161] local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 100 pg[9.11( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99) [1] r=-1 lpr=99 DELETING pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038941 2 0.000375
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 100 pg[9.11( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039271 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 100 pg[9.11( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=97/98 n=6 ec=53/30 lis/c=97/53 les/c/f=98/54/0 sis=99) [1] r=-1 lpr=99 pi=[53,99)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.055648 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 4644864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 4644864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 4628480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 740303 data_alloc: 218103808 data_used: 143360
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 100 heartbeat osd_stat(store_statfs(0x4fc6de000/0x0/0x4ffc00000, data 0xae775/0x12c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 4628480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.835780144s of 10.894430161s, submitted: 101
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 62.889094 144 0.001609
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 62.892661 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 62.892744 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 62.892791 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=43'1161 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=101 pruub=9.111286163s) [1] r=-1 lpr=101 pi=[53,101)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 281.659820557s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=101 pruub=9.110987663s) [1] r=-1 lpr=101 pi=[53,101)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.659820557s@ mbc={}] exit Reset 0.000350 1 0.000473
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=101 pruub=9.110987663s) [1] r=-1 lpr=101 pi=[53,101)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.659820557s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=101 pruub=9.110987663s) [1] r=-1 lpr=101 pi=[53,101)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.659820557s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=101 pruub=9.110987663s) [1] r=-1 lpr=101 pi=[53,101)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.659820557s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=101 pruub=9.110987663s) [1] r=-1 lpr=101 pi=[53,101)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.659820557s@ mbc={}] exit Start 0.000090 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 101 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=101 pruub=9.110987663s) [1] r=-1 lpr=101 pi=[53,101)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.659820557s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 4620288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=101) [1] r=-1 lpr=101 pi=[53,101)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.867183 3 0.000175
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=101) [1] r=-1 lpr=101 pi=[53,101)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.867348 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=101) [1] r=-1 lpr=101 pi=[53,101)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000177 1 0.000242
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000060 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000121 1 0.000245
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000045 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 102 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.3 deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.3 deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 102 heartbeat osd_stat(store_statfs(0x4fc6d9000/0x0/0x4ffc00000, data 0xb2835/0x132000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 4612096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999861 4 0.000148
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000158 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=53/54 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 103 handle_osd_map epochs [102,103], i have 103, src has [1,103]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=53/53 les/c/f=54/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.309296 5 0.000365
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000092 1 0.000075
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000409 1 0.000030
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031303 2 0.000075
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 103 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 4603904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.818989 1 0.000091
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.160348 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.160563 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.160678 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[53,102)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104 pruub=15.148774147s) [1] async=[1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 active pruub 290.725952148s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104 pruub=15.148613930s) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 290.725952148s@ mbc={}] exit Reset 0.000206 1 0.000304
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104 pruub=15.148613930s) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 290.725952148s@ mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104 pruub=15.148613930s) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 290.725952148s@ mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104 pruub=15.148613930s) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 290.725952148s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104 pruub=15.148613930s) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 290.725952148s@ mbc={}] exit Start 0.000123 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 104 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104 pruub=15.148613930s) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 290.725952148s@ mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 4612096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 760111 data_alloc: 218103808 data_used: 143360
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 105 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006496 7 0.000346
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 105 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 105 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 105 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000074 1 0.000108
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 105 pg[9.12( v 43'1161 (0'0,43'1161] local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 105 pg[9.12( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104) [1] r=-1 lpr=104 DELETING pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030873 2 0.000213
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 105 pg[9.12( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.031036 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 105 pg[9.12( v 43'1161 (0'0,43'1161] lb MIN local-lis/les=102/103 n=6 ec=53/30 lis/c=102/53 les/c/f=103/54/0 sis=104) [1] r=-1 lpr=104 pi=[53,104)/1 crt=43'1161 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.037757 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 4612096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 4612096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fc6d1000/0x0/0x4ffc00000, data 0xb897f/0x13a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 4603904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 4571136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 107 heartbeat osd_stat(store_statfs(0x4fc6cb000/0x0/0x4ffc00000, data 0xbca3f/0x140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765416 data_alloc: 218103808 data_used: 139264
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.19 deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.19 deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 107 handle_osd_map epochs [108,109], i have 107, src has [1,109]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 4653056 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fc6cb000/0x0/0x4ffc00000, data 0xbca3f/0x140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.1c deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.1c deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.097275734s of 10.158128738s, submitted: 47
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 4653056 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 110 heartbeat osd_stat(store_statfs(0x4fc6c4000/0x0/0x4ffc00000, data 0xc0b35/0x146000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 4644864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 4636672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 4628480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 786093 data_alloc: 218103808 data_used: 147456
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 5668864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 5668864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.17 deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.17 deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 5652480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 112 heartbeat osd_stat(store_statfs(0x4fc6bc000/0x0/0x4ffc00000, data 0xc6a49/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 112 handle_osd_map epochs [113,114], i have 112, src has [1,114]
Jan 22 05:01:37 np0005591760 nova_compute[248045]: 2026-01-22 10:01:37.924 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 112 handle_osd_map epochs [113,114], i have 114, src has [1,114]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 5595136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.1b deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.1b deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19(unlocked)] enter Initial
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=0 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000796 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=0 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000256 1 0.000282
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000072 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000260 1 0.000183
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000154 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000470 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 5586944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 803235 data_alloc: 218103808 data_used: 147456
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 115 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.002474 2 0.000235
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.002996 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.003111 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=115) [0] r=0 lpr=115 pi=[79,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000066 1 0.000113
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 5578752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.027999878s of 10.070414543s, submitted: 41
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 117 pg[9.19( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.003281 5 0.000042
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 117 pg[9.19( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 117 pg[9.19( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=79/79 les/c/f=80/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 117 pg[9.19( v 43'1161 lc 42'830 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001740 4 0.000137
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 117 pg[9.19( v 43'1161 lc 42'830 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 117 pg[9.19( v 43'1161 lc 42'830 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000120 1 0.000059
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 117 pg[9.19( v 43'1161 lc 42'830 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 117 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049810 1 0.000070
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 117 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 5562368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.656036 1 0.000037
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.708082 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started 1.711418 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[79,116)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 luod=0'0 crt=43'1161 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] exit Reset 0.000229 1 0.000615
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] exit Start 0.000110 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000197 2 0.000225
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0olog.dups.size()=45
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=45
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=116/117 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000566 2 0.000095
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=116/117 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=116/117 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=116/117 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 5570560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a(unlocked)] enter Initial
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=0 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000074 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=0 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000040
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000158 1 0.000056
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000063 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000234 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 118 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b(unlocked)] enter Initial
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=0 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=0 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000022 1 0.000046
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000436 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000137 1 0.000545
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=116/117 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003452 2 0.000099
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=116/117 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004323 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=116/117 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=118/119 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000059 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000587 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.282317 2 0.000089
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.282570 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.282591 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=117) [0] r=0 lpr=118 pi=[81,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 119 handle_osd_map epochs [118,119], i have 119, src has [1,119]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000146 1 0.000171
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=118/119 n=5 ec=53/30 lis/c=116/79 les/c/f=117/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=118/119 n=5 ec=53/30 lis/c=118/79 les/c/f=119/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002209 4 0.000146
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=118/119 n=5 ec=53/30 lis/c=118/79 les/c/f=119/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=118/119 n=5 ec=53/30 lis/c=118/79 les/c/f=119/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 119 pg[9.19( v 43'1161 (0'0,43'1161] local-lis/les=118/119 n=5 ec=53/30 lis/c=118/79 les/c/f=119/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=43'1161 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc6a7000/0x0/0x4ffc00000, data 0xd2e13/0x162000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 5554176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 119 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1a( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.005752 5 0.000045
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1a( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1a( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=81/81 les/c/f=82/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.006522 2 0.000475
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.007526 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.008022 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=119) [0] r=0 lpr=119 pi=[64,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000139 1 0.000565
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000043 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1a( v 43'1161 lc 42'857 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002705 4 0.000189
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1a( v 43'1161 lc 42'857 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1a( v 43'1161 lc 42'857 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000074 1 0.000053
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1a( v 43'1161 lc 42'857 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.029625 1 0.000042
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 120 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fc6a6000/0x0/0x4ffc00000, data 0xd4f39/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 4448256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842888 data_alloc: 218103808 data_used: 147456
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.968233 1 0.000061
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.000849 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started 2.006656 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[81,119)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 luod=0'0 crt=43'1161 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] exit Reset 0.000243 1 0.000395
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] exit Start 0.000085 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1b( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.000856 6 0.000136
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1b( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1b( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=64/64 les/c/f=65/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003345 2 0.000345
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1b( v 43'1161 lc 42'838 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004630 3 0.000100
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1b( v 43'1161 lc 42'838 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1b( v 43'1161 lc 42'838 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000034 1 0.000029
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1b( v 43'1161 lc 42'838 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0olog.dups.size()=27
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=27
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=119/120 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002219 2 0.000121
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=119/120 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=119/120 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=119/120 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.015588 1 0.000020
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 121 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 4440064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.983285 1 0.000024
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.003614 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started 2.004567 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[64,120)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 luod=0'0 crt=43'1161 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] exit Reset 0.000081 1 0.000121
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=119/120 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998414 2 0.000099
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=119/120 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004153 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=119/120 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=121/122 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000316 2 0.000043
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 122 handle_osd_map epochs [121,122], i have 122, src has [1,122]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=120/121 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000599 2 0.000058
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=120/121 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=120/121 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=120/121 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=121/122 n=5 ec=53/30 lis/c=119/81 les/c/f=120/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=121/122 n=5 ec=53/30 lis/c=121/81 les/c/f=122/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001280 3 0.000080
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=121/122 n=5 ec=53/30 lis/c=121/81 les/c/f=122/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=121/122 n=5 ec=53/30 lis/c=121/81 les/c/f=122/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 122 pg[9.1a( v 43'1161 (0'0,43'1161] local-lis/les=121/122 n=5 ec=53/30 lis/c=121/81 les/c/f=122/82/0 sis=121) [0] r=0 lpr=121 pi=[81,121)/1 crt=43'1161 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4546560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 122 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 123 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=120/121 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012916 2 0.000068
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 123 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=120/121 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013886 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 123 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=120/121 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 123 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=122/123 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 123 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=122/123 n=5 ec=53/30 lis/c=120/64 les/c/f=121/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 123 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=122/123 n=5 ec=53/30 lis/c=122/64 les/c/f=123/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001450 4 0.000119
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 123 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=122/123 n=5 ec=53/30 lis/c=122/64 les/c/f=123/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 123 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=122/123 n=5 ec=53/30 lis/c=122/64 les/c/f=123/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 123 pg[9.1b( v 43'1161 (0'0,43'1161] local-lis/les=122/123 n=5 ec=53/30 lis/c=122/64 les/c/f=123/65/0 sis=122) [0] r=0 lpr=122 pi=[64,122)/1 crt=43'1161 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 4530176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 4521984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc697000/0x0/0x4ffc00000, data 0xdcf43/0x173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 4513792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858773 data_alloc: 218103808 data_used: 147456
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4497408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc699000/0x0/0x4ffc00000, data 0xdcf43/0x173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4489216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc699000/0x0/0x4ffc00000, data 0xdcf43/0x173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 123 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.364282608s of 10.430208206s, submitted: 74
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4472832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4472832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.4 deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.4 deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc695000/0x0/0x4ffc00000, data 0xdf02f/0x176000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 124 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 4448256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875654 data_alloc: 218103808 data_used: 147456
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 4366336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.19 deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.19 deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 126 handle_osd_map epochs [127,128], i have 126, src has [1,128]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 4333568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc689000/0x0/0x4ffc00000, data 0xe7061/0x182000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 4333568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4308992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 4317184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 883882 data_alloc: 218103808 data_used: 151552
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc68a000/0x0/0x4ffc00000, data 0xe7061/0x182000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4308992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e(unlocked)] enter Initial
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=0 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000104 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=0 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000031 1 0.000059
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000163 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000202 1 0.000275
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000054 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000306 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4308992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc686000/0x0/0x4ffc00000, data 0xe9003/0x185000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.258762360s of 10.294144630s, submitted: 30
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.541670 2 0.000124
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.542086 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.542321 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 131 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000643 1 0.001062
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000310 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 4284416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f(unlocked)] enter Initial
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=0 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=0 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000034
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000145 1 0.000058
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000048 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000206 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1e( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.005120 5 0.000567
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1e( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1e( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1e( v 43'1161 lc 42'992 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001894 4 0.000204
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1e( v 43'1161 lc 42'992 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1e( v 43'1161 lc 42'992 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000051 1 0.000068
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1e( v 43'1161 lc 42'992 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035659 1 0.000041
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 132 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 5324800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 132 handle_osd_map epochs [132,133], i have 133, src has [1,133]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.964673 1 0.000066
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.002414 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started 2.007960 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 luod=0'0 crt=43'1161 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] exit Reset 0.000075 1 0.000122
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000039
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.003594 2 0.000071
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.003874 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.004333 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=132) [0] r=0 lpr=132 pi=[93,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000212 1 0.000734
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0olog.dups.size()=30
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=30
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=131/132 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001438 2 0.000042
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=131/132 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=131/132 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=131/132 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000109 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc67b000/0x0/0x4ffc00000, data 0xef1d0/0x18e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 5275648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909802 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc67b000/0x0/0x4ffc00000, data 0xef1d0/0x18e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 133 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=131/132 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999561 3 0.000074
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=131/132 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001127 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=131/132 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1f( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.000174 5 0.000380
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1f( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1f( v 43'1161 lc 0'0 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=93/93 les/c/f=94/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 crt=43'1161 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=133/73 les/c/f=134/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001609 4 0.000499
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=133/73 les/c/f=134/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=133/73 les/c/f=134/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000046 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1e( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=133/73 les/c/f=134/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=43'1161 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1f( v 43'1161 lc 42'851 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001855 4 0.000707
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1f( v 43'1161 lc 42'851 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1f( v 43'1161 lc 42'851 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000046 1 0.000048
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1f( v 43'1161 lc 42'851 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 luod=0'0 crt=43'1161 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035737 1 0.000055
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 134 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc67a000/0x0/0x4ffc00000, data 0xf11c1/0x191000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 5251072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.964101 1 0.000037
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.001870 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] exit Started 2.002323 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=133) [0]/[1] r=-1 lpr=133 pi=[93,133)/1 luod=0'0 crt=43'1161 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 luod=0'0 crt=43'1161 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] exit Reset 0.000095 1 0.000142
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Start
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000044
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=0/0 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: merge_log_dups log.dups.size()=0olog.dups.size()=33
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=33
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001569 2 0.000052
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 135 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 5251072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 135 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 136 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002983 3 0.000106
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 136 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004671 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 136 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=133/134 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 136 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=135/136 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 136 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=135/136 n=5 ec=53/30 lis/c=133/93 les/c/f=134/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 136 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=135/136 n=5 ec=53/30 lis/c=135/93 les/c/f=136/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001454 4 0.001194
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 136 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=135/136 n=5 ec=53/30 lis/c=135/93 les/c/f=136/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 136 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=135/136 n=5 ec=53/30 lis/c=135/93 les/c/f=136/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 pg_epoch: 136 pg[9.1f( v 43'1161 (0'0,43'1161] local-lis/les=135/136 n=5 ec=53/30 lis/c=135/93 les/c/f=136/94/0 sis=135) [0] r=0 lpr=135 pi=[93,135)/1 crt=43'1161 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 5251072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 5226496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 5226496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 5226496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 5218304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 5218304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 5218304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 5210112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 5210112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 5210112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 5201920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 5193728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 5185536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 5185536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 5177344 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 5177344 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 5177344 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 5169152 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 5169152 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 5160960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 5160960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 5160960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 5152768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 5152768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 5144576 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 5144576 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 5136384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 5128192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 5128192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 5120000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 5120000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 5120000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 5103616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 5103616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 5095424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 5095424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 5087232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 5087232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 5079040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 5079040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 5079040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 5070848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 5070848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 5062656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 5062656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 5062656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 5054464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 5054464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 5046272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 5046272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 5046272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 5038080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 5038080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 5029888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 5029888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 5021696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 5021696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 5005312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df327000 session 0x5581ddfd1c20
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581ddd6bc00 session 0x5581dbfe94a0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 5005312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 5005312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 5005312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4997120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4997120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4988928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4988928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4988928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581ddd87c00 session 0x5581de12e3c0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de40d400 session 0x5581de218d20
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 4980736 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 4980736 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928573 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 4972544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 73.992576599s of 74.026023865s, submitted: 50
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 4964352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 4964352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 4956160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 4956160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928705 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 4947968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 4947968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 4947968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc670000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 4939776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 4931584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927997 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 4931584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.547247887s of 10.549683571s, submitted: 2
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 4923392 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 4923392 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 4923392 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4915200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928327 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4915200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4915200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 4907008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 4890624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4882432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928195 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4882432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4882432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de40c400 session 0x5581dedfb860
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 4874240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 4874240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.716902733s of 12.721277237s, submitted: 4
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 4866048 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928063 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 4857856 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 4857856 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 4849664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 4849664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 4841472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928063 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 4841472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 4825088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 4825088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 4825088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 4808704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928195 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 4808704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 4800512 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 4800512 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581dc003800 session 0x5581de12e3c0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df316000 session 0x5581dedfa780
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.327797890s of 14.330419540s, submitted: 2
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 4792320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 4784128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929707 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 4784128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 4775936 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 4775936 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 4775936 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4767744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929116 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4767744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4767744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 4759552 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581ddd87000 session 0x5581df2c4000
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de72b800 session 0x5581de69de00
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 4759552 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.396247864s of 10.400572777s, submitted: 3
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 4751360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929116 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 4751360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 4743168 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 4743168 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 4734976 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 4718592 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929116 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 4718592 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4710400 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4710400 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4702208 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4702208 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929248 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4702208 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 4694016 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 4694016 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 4685824 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.474945068s of 15.477352142s, submitted: 2
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 4685824 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930760 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 4677632 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 4677632 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 4677632 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 4669440 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 4669440 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930628 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 4661248 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de72b800 session 0x5581dbfe94a0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 4661248 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 4653056 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 4653056 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 4653056 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930496 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 4644864 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:01:37 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 4644864 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 4644864 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 4636672 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 4636672 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930496 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 4628480 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.916780472s of 16.920440674s, submitted: 3
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 4628480 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 4620288 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 4620288 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 4620288 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932140 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 4612096 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 4612096 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 4603904 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 4603904 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 4603904 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932140 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 4595712 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 4595712 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 4595712 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4587520 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4587520 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931549 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4579328 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4579328 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4571136 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4571136 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.595003128s of 17.598495483s, submitted: 3
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4562944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931417 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4562944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4562944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581ddd87400 session 0x5581df2c45a0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 4554752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 4554752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 4546560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931417 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 4546560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4538368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4530176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4530176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4521984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931417 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4521984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 4505600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 4505600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.593015671s of 14.594431877s, submitted: 1
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 4497408 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4489216 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931549 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4489216 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 4464640 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 4456448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 4456448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 4448256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933061 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 4448256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 4448256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 4440064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 4440064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 4431872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932470 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 4431872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 4431872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 4423680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.810277939s of 14.815903664s, submitted: 3
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de1e0800 session 0x5581de12e960
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581dc002400 session 0x5581df2c43c0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 4423680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 4415488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932338 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 4415488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 4415488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 4407296 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 4407296 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 4407296 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932338 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 4399104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 4399104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 4390912 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 4390912 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.853135109s of 10.854328156s, submitted: 1
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 4382720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932470 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 4374528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 4374528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 4358144 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 4358144 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 4349952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935494 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4341760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4341760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4341760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 4333568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935494 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 4284416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.043842316s of 12.046784401s, submitted: 3
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 4284416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 4284416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 4276224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 4276224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934903 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 4276224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 4276224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 4276224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 4268032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 4268032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 4268032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 4259840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 4259840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 4251648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 4251648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 4251648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 4243456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 4243456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 4235264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 4227072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 4227072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 4218880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 4218880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 4210688 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 4202496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 4202496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 4194304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 4194304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 4186112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 4186112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 4186112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 4177920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 4177920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 4169728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 4161536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 4161536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 4153344 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 4153344 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 4145152 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 4145152 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 4136960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 4136960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 4136960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 4112384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 4112384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 4112384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 4104192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 4096000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 4079616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 4079616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581dc000400 session 0x5581df173860
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581dc000800 session 0x5581df173680
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4063232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4063232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4063232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 4055040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 4055040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 4046848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 4046848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 60.925277710s of 60.927970886s, submitted: 2
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 4046848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 8499 writes, 32K keys, 8499 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 8499 writes, 2181 syncs, 3.90 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 8499 writes, 32K keys, 8499 commit groups, 1.0 writes per commit group, ingest: 20.89 MB, 0.03 MB/s
Interval WAL: 8499 writes, 2181 syncs, 3.90 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5581da631350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5581da631350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 3973120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 3973120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934903 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 3964928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 3964928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 3964928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 3956736 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 3956736 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936415 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 3940352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 3932160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 3932160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 3923968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.039274216s of 12.042833328s, submitted: 3
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 3923968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935233 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 3923968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 3915776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 3915776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 3915776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 3907584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 3907584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 3899392 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 3883008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 3883008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 3874816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 3874816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 3874816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 3866624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 3866624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 3858432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 3858432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 3850240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 3850240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 3850240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 3842048 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 3842048 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3833856 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3833856 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3825664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3825664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3825664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de1de400 session 0x5581de6a21e0
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581ddd27000 session 0x5581df2c5e00
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3817472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3817472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3817472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3809280 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3801088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3801088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3801088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3801088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3792896 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3792896 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3784704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.245330811s of 38.248317719s, submitted: 2
Jan 22 05:01:37 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3784704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3776512 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3768320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935233 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3768320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3760128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3768320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3768320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3760128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938257 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3760128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3751936 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3751936 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3743744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3743744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937666 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3743744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3735552 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3735552 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.062061310s of 16.066551208s, submitted: 4
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3727360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3727360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3727360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3719168 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3719168 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 3645440 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3514368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3514368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3506176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3506176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3506176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3497984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3497984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3497984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 3489792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 3489792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3481600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3481600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3481600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df351000 session 0x5581ddfd2960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de1de000 session 0x5581ddfd0f00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 3473408 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.643405914s of 28.714008331s, submitted: 119
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3497984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85901312 unmapped: 3186688 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85901312 unmapped: 3186688 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85901312 unmapped: 3186688 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.088833809s of 12.240459442s, submitted: 237
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939178 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939178 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937996 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.970689774s of 12.974460602s, submitted: 3
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de159800 session 0x5581de142780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581db44f400 session 0x5581df2c4000
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937864 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937864 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.103719711s of 12.104599953s, submitted: 1
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937996 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937996 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937405 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df317c00 session 0x5581dfa78000
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937405 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.872491837s of 16.874778748s, submitted: 2
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937273 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937273 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937405 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df326000 session 0x5581dee7eb40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df314000 session 0x5581de12e3c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937405 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.118247986s of 23.120439529s, submitted: 2
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937273 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938917 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 3112960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 3112960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 3112960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940429 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 3112960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940429 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.631116867s of 19.636159897s, submitted: 4
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940297 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940297 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581ddd88400 session 0x5581df2c5860
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581ddd27000 session 0x5581dee7e000
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 3096576 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581db44cc00 session 0x5581de0ae960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581dcf5e800 session 0x5581de0ae780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 3096576 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940297 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940297 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.061443329s of 17.062528610s, submitted: 1
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940561 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942073 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.044746399s of 12.049272537s, submitted: 4
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940891 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de40c000 session 0x5581dfa78960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df314400 session 0x5581dfa78780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940627 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940627 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.199202538s of 15.202738762s, submitted: 3
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940759 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de72b800 session 0x5581de6a3e00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de158000 session 0x5581df2c4d20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940759 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940759 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.810004234s of 14.811164856s, submitted: 1
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940891 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942271 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942271 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942271 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.929155350s of 17.932794571s, submitted: 3
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df325400 session 0x5581dfa79680
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 75.419815063s of 75.422309875s, submitted: 1
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943783 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943192 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943192 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.829685211s of 16.833559036s, submitted: 3
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581dc9e7400 session 0x5581dcb7ad20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581ddd6b400 session 0x5581ddd063c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df327400 session 0x5581de00cf00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df316c00 session 0x5581ded763c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 105.914726257s of 105.915969849s, submitted: 1
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943324 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944836 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.300821304s of 10.306298256s, submitted: 4
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945757 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945625 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.359743118s of 10.379639626s, submitted: 25
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 139 ms_handle_reset con 0x5581ddd86800 session 0x5581ded77a40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 2908160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 2883584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959926 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 139 ms_handle_reset con 0x5581ddd86800 session 0x5581de00c1e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86245376 unmapped: 2842624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc664000/0x0/0x4ffc00000, data 0xfd48f/0x1a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 139 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 2834432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xff597/0x1a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 2834432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 2834432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 2834432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961156 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xff597/0x1a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 2924544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 2924544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964082 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964082 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964082 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964082 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 2908160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 2908160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964082 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964082 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 ms_handle_reset con 0x5581ded91800 session 0x5581de6a3e00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 ms_handle_reset con 0x5581df315c00 session 0x5581ded99e00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 ms_handle_reset con 0x5581db44ec00 session 0x5581ded99a40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.062061310s of 38.074607849s, submitted: 38
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 ms_handle_reset con 0x5581ddd89c00 session 0x5581ded99860
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 2891776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 ms_handle_reset con 0x5581db44ec00 session 0x5581ded990e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 ms_handle_reset con 0x5581ddd86800 session 0x5581ded98960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 2883584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86212608 unmapped: 2875392 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 143 ms_handle_reset con 0x5581ded91800 session 0x5581ded98000
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 143 ms_handle_reset con 0x5581db44dc00 session 0x5581df50c1e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 143 ms_handle_reset con 0x5581df316400 session 0x5581df50c3c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 143 ms_handle_reset con 0x5581df316400 session 0x5581df50c5a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 143 ms_handle_reset con 0x5581db44dc00 session 0x5581df50c960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 2867200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972498 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 2867200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc658000/0x0/0x4ffc00000, data 0x1057a5/0x1b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 2867200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 2859008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 2859008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 2859008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972498 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc658000/0x0/0x4ffc00000, data 0x1057a5/0x1b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974492 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc656000/0x0/0x4ffc00000, data 0x107777/0x1b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc656000/0x0/0x4ffc00000, data 0x107777/0x1b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.917074203s of 17.944137573s, submitted: 28
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90275840 unmapped: 3006464 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022404 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90357760 unmapped: 2924544 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbf60000/0x0/0x4ffc00000, data 0x7fe777/0x8ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90357760 unmapped: 2924544 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002c00 session 0x5581ded772c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd88400 session 0x5581de16a780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028316 data_alloc: 218103808 data_used: 163840
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40c000 session 0x5581dfa79a40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86000 session 0x5581dfa78780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbf60000/0x0/0x4ffc00000, data 0x7fe777/0x8ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028316 data_alloc: 218103808 data_used: 163840
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbf60000/0x0/0x4ffc00000, data 0x7fe777/0x8ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.525476456s of 14.562845230s, submitted: 39
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028448 data_alloc: 218103808 data_used: 163840
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90267648 unmapped: 3014656 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df327000 session 0x5581dfa790e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2400 session 0x5581dee7e960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90267648 unmapped: 3014656 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbf60000/0x0/0x4ffc00000, data 0x7fe777/0x8ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de72b400 session 0x5581dee7f680
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df351800 session 0x5581de8cdc20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90701824 unmapped: 15704064 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb70d000/0x0/0x4ffc00000, data 0x10507d9/0x10ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 15671296 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092105 data_alloc: 218103808 data_used: 163840
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 15671296 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 15671296 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 15671296 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 15671296 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 15671296 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092105 data_alloc: 218103808 data_used: 163840
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb70d000/0x0/0x4ffc00000, data 0x10507d9/0x10ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.144878387s of 11.177850723s, submitted: 35
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df230000 session 0x5581df44b0e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb70d000/0x0/0x4ffc00000, data 0x10507d9/0x10ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90595328 unmapped: 15810560 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 11755520 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb6e8000/0x0/0x4ffc00000, data 0x10747fc/0x1124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153696 data_alloc: 218103808 data_used: 8896512
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153564 data_alloc: 218103808 data_used: 8896512
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb6e8000/0x0/0x4ffc00000, data 0x10747fc/0x1124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.294046402s of 10.303358078s, submitted: 9
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 4390912 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3702784 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa2fa000/0x0/0x4ffc00000, data 0x12ba7fc/0x136a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3702784 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3702784 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa2fa000/0x0/0x4ffc00000, data 0x12ba7fc/0x136a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102735872 unmapped: 3670016 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181628 data_alloc: 218103808 data_used: 9027584
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102735872 unmapped: 3670016 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102735872 unmapped: 3670016 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102735872 unmapped: 3670016 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000800 session 0x5581ddf1e3c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44dc00 session 0x5581df44b4a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de1e0400 session 0x5581dee7ef00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10469376 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa9ae000/0x0/0x4ffc00000, data 0x7fe777/0x8ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10469376 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035432 data_alloc: 218103808 data_used: 163840
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10469376 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10469376 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44d800 session 0x5581df50cb40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.000518799s of 12.062762260s, submitted: 87
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dbf78800 session 0x5581de2192c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985314 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000400 session 0x5581dfa79860
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40d000 session 0x5581de15b4a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985314 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 9768 writes, 35K keys, 9768 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 9768 writes, 2784 syncs, 3.51 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1269 writes, 2760 keys, 1269 commit groups, 1.0 writes per commit group, ingest: 2.21 MB, 0.00 MB/s
Interval WAL: 1269 writes, 603 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5581da631350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5581da631350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memta
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985314 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.208404541s of 16.214429855s, submitted: 8
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985446 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44f800 session 0x5581de0ae1e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96124928 unmapped: 13434880 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd6000/0x0/0x4ffc00000, data 0x5d9767/0x686000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95330304 unmapped: 14229504 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023564 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2800 session 0x5581dee7fa40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2800 session 0x5581ddfd23c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44f800 session 0x5581ddfd03c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dbf78800 session 0x5581ddfd0d20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 94814208 unmapped: 14745600 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd4000/0x0/0x4ffc00000, data 0x5d979a/0x688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 94814208 unmapped: 14745600 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd4000/0x0/0x4ffc00000, data 0x5d979a/0x688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 12910592 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 12910592 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 12910592 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059359 data_alloc: 218103808 data_used: 5009408
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd4000/0x0/0x4ffc00000, data 0x5d979a/0x688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 12910592 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 12910592 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd4000/0x0/0x4ffc00000, data 0x5d979a/0x688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 12910592 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd4000/0x0/0x4ffc00000, data 0x5d979a/0x688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 12894208 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 12894208 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059359 data_alloc: 218103808 data_used: 5009408
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.747314453s of 16.761581421s, submitted: 13
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 12894208 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd4000/0x0/0x4ffc00000, data 0x5d979a/0x688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 8904704 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa104000/0x0/0x4ffc00000, data 0x10a979a/0x1158000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 8241152 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 8241152 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 8241152 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153515 data_alloc: 218103808 data_used: 5873664
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 8241152 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa067000/0x0/0x4ffc00000, data 0x114679a/0x11f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 8241152 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa067000/0x0/0x4ffc00000, data 0x114679a/0x11f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 8241152 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9502720 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9502720 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152083 data_alloc: 218103808 data_used: 5873664
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9502720 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9502720 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa043000/0x0/0x4ffc00000, data 0x116a79a/0x1219000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9502720 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9502720 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa043000/0x0/0x4ffc00000, data 0x116a79a/0x1219000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.747667313s of 13.811450958s, submitted: 99
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152003 data_alloc: 218103808 data_used: 5873664
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa03c000/0x0/0x4ffc00000, data 0x117179a/0x1220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152307 data_alloc: 218103808 data_used: 5881856
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa030000/0x0/0x4ffc00000, data 0x117d79a/0x122c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101294080 unmapped: 9322496 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df230c00 session 0x5581dedfaf00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000400 session 0x5581dded9a40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44e000 session 0x5581de16a780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa030000/0x0/0x4ffc00000, data 0x117d79a/0x122c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998156 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998156 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998156 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2400 session 0x5581ded510e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df351000 session 0x5581df44a780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998156 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.485546112s of 27.501417160s, submitted: 25
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc001800 session 0x5581ddf55680
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12943360 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df326400 session 0x5581de142f00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de1de000 session 0x5581ddd065a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12918784 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12918784 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40cc00 session 0x5581de165e00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12935168 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029680 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd89400 session 0x5581dcb7b2c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12935168 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faca5000/0x0/0x4ffc00000, data 0x50a767/0x5b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df446400 session 0x5581de1421e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df446800 session 0x5581de1430e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 12623872 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97296384 unmapped: 13320192 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fac80000/0x0/0x4ffc00000, data 0x52e777/0x5dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062130 data_alloc: 218103808 data_used: 4358144
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fac80000/0x0/0x4ffc00000, data 0x52e777/0x5dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062262 data_alloc: 218103808 data_used: 4358144
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fac80000/0x0/0x4ffc00000, data 0x52e777/0x5dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.377078056s of 14.464314461s, submitted: 133
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98746368 unmapped: 11870208 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103817216 unmapped: 6799360 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 6266880 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 6266880 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151431 data_alloc: 218103808 data_used: 5480448
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 6266880 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa27b000/0x0/0x4ffc00000, data 0xf33777/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 6266880 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa27b000/0x0/0x4ffc00000, data 0xf33777/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 6266880 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 7364608 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 7364608 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147603 data_alloc: 218103808 data_used: 5480448
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa27b000/0x0/0x4ffc00000, data 0xf33777/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 7364608 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 7356416 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa27b000/0x0/0x4ffc00000, data 0xf33777/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 7356416 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 7356416 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 7356416 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147603 data_alloc: 218103808 data_used: 5480448
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.256039619s of 14.299996376s, submitted: 61
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 7348224 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86000 session 0x5581dee7e1e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44e400 session 0x5581df2845a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40d800 session 0x5581df173a40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa27b000/0x0/0x4ffc00000, data 0xf33777/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [1])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df316800 session 0x5581de69e780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ded8f400 session 0x5581de6a2000
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44e400 session 0x5581de16ba40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86000 session 0x5581de218780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40d800 session 0x5581dee7f0e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df316800 session 0x5581dbf28f00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103514112 unmapped: 15114240 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 15015936 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 15015936 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df231800 session 0x5581ded99680
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103776256 unmapped: 14852096 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212746 data_alloc: 218103808 data_used: 5480448
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 12017664 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x16a7787/0x1756000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 8118272 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 8118272 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 8118272 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 8118272 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263950 data_alloc: 234881024 data_used: 12967936
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x16a7787/0x1756000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 8085504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de158400 session 0x5581de1645a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86800 session 0x5581ddedd0e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 8085504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 8085504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 8085504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 8085504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263950 data_alloc: 234881024 data_used: 12967936
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x16a7787/0x1756000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.902395248s of 15.079626083s, submitted: 262
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 4235264 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 5062656 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 5062656 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9682000/0x0/0x4ffc00000, data 0x1b2b787/0x1bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df315c00 session 0x5581dded7c20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de0bd400 session 0x5581df44a960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 5029888 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 5029888 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308938 data_alloc: 234881024 data_used: 13201408
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 5021696 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 5021696 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 5021696 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9682000/0x0/0x4ffc00000, data 0x1b2b787/0x1bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 5021696 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44e400 session 0x5581dbfe8780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86000 session 0x5581de8cdc20
Jan 22 05:01:38 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86800 session 0x5581ddedda40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 10387456 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159570 data_alloc: 218103808 data_used: 5480448
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 10387456 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 10387456 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 10387456 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.566994667s of 12.633753777s, submitted: 97
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de1e0800 session 0x5581dfa79c20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de72b000 session 0x5581de16ad20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dbf79c00 session 0x5581de16be00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019565 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021998 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021998 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.935996056s of 11.948678970s, submitted: 16
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103464960 unmapped: 15163392 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103464960 unmapped: 15163392 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103464960 unmapped: 15163392 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103464960 unmapped: 15163392 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103473152 unmapped: 15155200 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021275 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103473152 unmapped: 15155200 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 15147008 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df350400 session 0x5581dc0970e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102350848 unmapped: 16277504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102350848 unmapped: 16277504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc003800 session 0x5581dcb7b2c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df446000 session 0x5581df1725a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102350848 unmapped: 16277504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046543 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.933603287s of 10.947580338s, submitted: 14
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc001800 session 0x5581de16bc20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102359040 unmapped: 16269312 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102359040 unmapped: 16269312 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 16891904 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faeaf000/0x0/0x4ffc00000, data 0x2ff78a/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 16891904 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 16891904 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061068 data_alloc: 218103808 data_used: 2031616
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faeaf000/0x0/0x4ffc00000, data 0x2ff78a/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 16891904 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 16891904 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 16891904 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faeaf000/0x0/0x4ffc00000, data 0x2ff78a/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 16908288 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 16908288 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061068 data_alloc: 218103808 data_used: 2031616
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 16908288 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.591248512s of 10.594951630s, submitted: 4
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faeaf000/0x0/0x4ffc00000, data 0x2ff78a/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105873408 unmapped: 12754944 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa141000/0x0/0x4ffc00000, data 0x106d78a/0x111b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 12476416 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 12476416 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 12476416 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169896 data_alloc: 218103808 data_used: 2195456
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 12468224 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 12468224 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 12468224 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa116000/0x0/0x4ffc00000, data 0x109878a/0x1146000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa114000/0x0/0x4ffc00000, data 0x109a78a/0x1148000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165672 data_alloc: 218103808 data_used: 2195456
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86000 session 0x5581dded7e00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.979077339s of 12.066130638s, submitted: 130
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa113000/0x0/0x4ffc00000, data 0x109b78a/0x1149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165896 data_alloc: 218103808 data_used: 2195456
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa113000/0x0/0x4ffc00000, data 0x109b78a/0x1149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa113000/0x0/0x4ffc00000, data 0x109b78a/0x1149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165420 data_alloc: 218103808 data_used: 2199552
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa112000/0x0/0x4ffc00000, data 0x109c78a/0x114a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.518879890s of 10.523790359s, submitted: 4
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 13131776 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165560 data_alloc: 218103808 data_used: 2199552
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd89800 session 0x5581ded503c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd26400 session 0x5581de6a34a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ded8e800 session 0x5581dbf283c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44f000 session 0x5581ddd4bc20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df350800 session 0x5581de00d4a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x150978a/0x15b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106348544 unmapped: 20676608 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106348544 unmapped: 20676608 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x150978a/0x15b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106348544 unmapped: 20676608 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x150978a/0x15b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106348544 unmapped: 20676608 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de1e1000 session 0x5581de16a3c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd89400 session 0x5581de8cc000
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205533 data_alloc: 218103808 data_used: 2199552
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000c00 session 0x5581deda6f00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df351c00 session 0x5581de0af860
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105709568 unmapped: 21315584 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9c80000/0x0/0x4ffc00000, data 0x152d79a/0x15dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9c7f000/0x0/0x4ffc00000, data 0x152d79a/0x15dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239971 data_alloc: 218103808 data_used: 6553600
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.457146645s of 14.484023094s, submitted: 27
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9c7f000/0x0/0x4ffc00000, data 0x152d79a/0x15dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9c7f000/0x0/0x4ffc00000, data 0x152d79a/0x15dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239839 data_alloc: 218103808 data_used: 6553600
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9c6e000/0x0/0x4ffc00000, data 0x153f79a/0x15ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111968256 unmapped: 15056896 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 15228928 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 15228928 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9a4f000/0x0/0x4ffc00000, data 0x175879a/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 15228928 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 15228928 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265765 data_alloc: 218103808 data_used: 6672384
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 15228928 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 15106048 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 15106048 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9a35000/0x0/0x4ffc00000, data 0x177879a/0x1827000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 15106048 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 15106048 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264093 data_alloc: 218103808 data_used: 6672384
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 15106048 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 15106048 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.666911125s of 13.710399628s, submitted: 76
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 112001024 unmapped: 15024128 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9a2b000/0x0/0x4ffc00000, data 0x178279a/0x1831000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9a2b000/0x0/0x4ffc00000, data 0x178279a/0x1831000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 112001024 unmapped: 15024128 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 112001024 unmapped: 15024128 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264037 data_alloc: 218103808 data_used: 6672384
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df350c00 session 0x5581df2c54a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000400 session 0x5581de188960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000c00 session 0x5581de164000
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109649920 unmapped: 17375232 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109649920 unmapped: 17375232 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109649920 unmapped: 17375232 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa10f000/0x0/0x4ffc00000, data 0x109f78a/0x114d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109649920 unmapped: 17375232 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109649920 unmapped: 17375232 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175510 data_alloc: 218103808 data_used: 2203648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109649920 unmapped: 17375232 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc9e7c00 session 0x5581de16a780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40d000 session 0x5581de8cc000
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041958 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18808832 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18808832 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041958 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18808832 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18808832 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18808832 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18808832 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 18800640 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041958 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 18800640 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 18800640 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 18800640 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 18800640 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de72b000 session 0x5581dcb7b2c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de72b000 session 0x5581de69f0e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd6b400 session 0x5581de69eb40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000400 session 0x5581de69e1e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.422857285s of 27.459486008s, submitted: 49
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000c00 session 0x5581de16ab40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 22396928 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079250 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab54000/0x0/0x4ffc00000, data 0x65b767/0x708000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 22388736 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 22388736 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df316800 session 0x5581de16be00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df316800 session 0x5581de16bc20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000400 session 0x5581de16b0e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000c00 session 0x5581de16b860
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 22380544 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 22380544 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 21143552 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120212 data_alloc: 218103808 data_used: 5742592
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab53000/0x0/0x4ffc00000, data 0x65b78a/0x709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 21143552 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109576192 unmapped: 21127168 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109576192 unmapped: 21127168 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de72b000 session 0x5581ddedd0e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd6b400 session 0x5581de16ad20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd6b400 session 0x5581df1725a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045868 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045868 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.282363892s of 20.319124222s, submitted: 42
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df446400 session 0x5581de219a40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 27197440 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118840 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 27197440 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa75e000/0x0/0x4ffc00000, data 0xa51767/0xafe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118840 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df351400 session 0x5581de0aed20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 22863872 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa75e000/0x0/0x4ffc00000, data 0xa51767/0xafe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 22863872 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dffe0c00 session 0x5581de69e5a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df325c00 session 0x5581de728960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd6b400 session 0x5581df44ba40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052673 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052673 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052673 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de72a800 session 0x5581de1652c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002000 session 0x5581dcb7ad20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052673 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.647817612s of 26.688156128s, submitted: 45
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40d000 session 0x5581df44a780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faca8000/0x0/0x4ffc00000, data 0x507767/0x5b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079825 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faca8000/0x0/0x4ffc00000, data 0x507767/0x5b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108229 data_alloc: 218103808 data_used: 4349952
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faca8000/0x0/0x4ffc00000, data 0x507767/0x5b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faca8000/0x0/0x4ffc00000, data 0x507767/0x5b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108229 data_alloc: 218103808 data_used: 4349952
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.121374130s of 14.125967026s, submitted: 2
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109314048 unmapped: 25067520 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109420544 unmapped: 24961024 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 24952832 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 24952832 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 24952832 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140959 data_alloc: 218103808 data_used: 4636672
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa95c000/0x0/0x4ffc00000, data 0x853767/0x900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa95c000/0x0/0x4ffc00000, data 0x853767/0x900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa95c000/0x0/0x4ffc00000, data 0x853767/0x900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140827 data_alloc: 218103808 data_used: 4636672
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa95c000/0x0/0x4ffc00000, data 0x853767/0x900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa95c000/0x0/0x4ffc00000, data 0x853767/0x900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de1e0400 session 0x5581ddedc5a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 24936448 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa95c000/0x0/0x4ffc00000, data 0x853767/0x900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.648491859s of 13.678412437s, submitted: 21
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002000 session 0x5581de16a3c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054817 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054817 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054817 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002c00 session 0x5581dcb7ab40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df446c00 session 0x5581dded83c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2400 session 0x5581dee7f2c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc001000 session 0x5581df44ad20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.103810310s of 15.110246658s, submitted: 7
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002000 session 0x5581de6a2780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002c00 session 0x5581de6a0f00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2400 session 0x5581ddf54780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df446c00 session 0x5581de16a960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df350c00 session 0x5581de6a1e00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 26468352 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131456 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 26468352 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 26468352 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa6f6000/0x0/0x4ffc00000, data 0xab77d9/0xb66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc003c00 session 0x5581dee7fc20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40dc00 session 0x5581de8cd0e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 26468352 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002400 session 0x5581de15ba40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc001400 session 0x5581df1721e0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 26443776 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 23773184 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191323 data_alloc: 218103808 data_used: 8990720
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 23773184 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 23773184 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa6f5000/0x0/0x4ffc00000, data 0xab77fc/0xb67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 23740416 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 23740416 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 23740416 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191323 data_alloc: 218103808 data_used: 8990720
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 23740416 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 23740416 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2800 session 0x5581de729c20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc001400 session 0x5581de15ad20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002400 session 0x5581de189e00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc003c00 session 0x5581df50c780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.102285385s of 13.141888618s, submitted: 41
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2800 session 0x5581de218f00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa6f5000/0x0/0x4ffc00000, data 0xab77fc/0xb67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 22568960 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 19218432 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 17113088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300899 data_alloc: 234881024 data_used: 9269248
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 17113088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 16547840 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9603000/0x0/0x4ffc00000, data 0x17987fc/0x1848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 10903552 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 10903552 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 10903552 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347715 data_alloc: 234881024 data_used: 16130048
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9603000/0x0/0x4ffc00000, data 0x17987fc/0x1848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123731968 unmapped: 10649600 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95e5000/0x0/0x4ffc00000, data 0x17b77fc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123731968 unmapped: 10649600 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95e5000/0x0/0x4ffc00000, data 0x17b77fc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 10805248 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 10805248 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 10805248 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343099 data_alloc: 234881024 data_used: 16134144
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95e5000/0x0/0x4ffc00000, data 0x17b77fc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 10805248 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.721275330s of 13.793152809s, submitted: 87
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 7127040 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 7061504 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 7061504 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 7061504 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416605 data_alloc: 234881024 data_used: 16449536
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127328256 unmapped: 7053312 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8c45000/0x0/0x4ffc00000, data 0x21577fc/0x2207000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127344640 unmapped: 7036928 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8c45000/0x0/0x4ffc00000, data 0x21577fc/0x2207000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 7004160 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413933 data_alloc: 234881024 data_used: 16449536
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8c1e000/0x0/0x4ffc00000, data 0x217e7fc/0x222e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8c1e000/0x0/0x4ffc00000, data 0x217e7fc/0x222e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413933 data_alloc: 234881024 data_used: 16449536
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8c1e000/0x0/0x4ffc00000, data 0x217e7fc/0x222e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df4d0400 session 0x5581ddf55c20
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.627637863s of 16.701566696s, submitted: 114
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc001400 session 0x5581de2185a0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 13770752 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 13770752 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 13770752 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247843 data_alloc: 234881024 data_used: 9273344
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 13770752 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9cc3000/0x0/0x4ffc00000, data 0x10d97fc/0x1189000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44cc00 session 0x5581de12fa40
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 13762560 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df317000 session 0x5581de15be00
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074156 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faad2000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faad2000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074156 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faad2000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faad2000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074156 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.855703354s of 18.884504318s, submitted: 43
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc9e7000 session 0x5581dcb7a780
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df4d0000 session 0x5581ddd4a3c0
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa483000/0x0/0x4ffc00000, data 0x91b7c9/0x9c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142047 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa483000/0x0/0x4ffc00000, data 0x91b7c9/0x9c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa483000/0x0/0x4ffc00000, data 0x91b7c9/0x9c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190383 data_alloc: 218103808 data_used: 7307264
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa483000/0x0/0x4ffc00000, data 0x91b7c9/0x9c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa483000/0x0/0x4ffc00000, data 0x91b7c9/0x9c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190383 data_alloc: 218103808 data_used: 7307264
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.366592407s of 16.391584396s, submitted: 36
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 114532352 unmapped: 24051712 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa483000/0x0/0x4ffc00000, data 0x91b7c9/0x9c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272121 data_alloc: 218103808 data_used: 7716864
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab54000/0x0/0x4ffc00000, data 0x12697c9/0x1317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268097 data_alloc: 218103808 data_used: 7720960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab53000/0x0/0x4ffc00000, data 0x126b7c9/0x1319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread fragmentation_score=0.000400 took=0.000035s
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab53000/0x0/0x4ffc00000, data 0x126b7c9/0x1319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab53000/0x0/0x4ffc00000, data 0x126b7c9/0x1319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268097 data_alloc: 218103808 data_used: 7720960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.798171997s of 12.866126060s, submitted: 121
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab52000/0x0/0x4ffc00000, data 0x126c7c9/0x131a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de1e1400 session 0x5581de6a0960
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ded8e400 session 0x5581dcb7b680
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 22298624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 22298624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 22298624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 22298624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 22298624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 22282240 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 22282240 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 22274048 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: do_command 'config diff' '{prefix=config diff}'
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: do_command 'config show' '{prefix=config show}'
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: do_command 'counter dump' '{prefix=counter dump}'
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: do_command 'counter schema' '{prefix=counter schema}'
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 22511616 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 22413312 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:01:38 np0005591760 ceph-osd[82185]: do_command 'log dump' '{prefix=log dump}'
Jan 22 05:01:38 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17433 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:38.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:38 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v888: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:01:38 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17454 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:38.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:38 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v889: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1391214206' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/726008761' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 05:01:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:38.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:38.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:38.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:38.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:01:38 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:01:39 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17478 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:39 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 22 05:01:39 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2615321859' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 05:01:39 np0005591760 podman[264930]: 2026-01-22 10:01:39.416797642 +0000 UTC m=+0.039835089 container create e945f9ecb647e6e16a39c33c5c6598026a443dbdb7593c387a26e844eb640c25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_torvalds, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:01:39 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17508 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:39 np0005591760 systemd[1]: Started libpod-conmon-e945f9ecb647e6e16a39c33c5c6598026a443dbdb7593c387a26e844eb640c25.scope.
Jan 22 05:01:39 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:01:39 np0005591760 podman[264930]: 2026-01-22 10:01:39.475882698 +0000 UTC m=+0.098920154 container init e945f9ecb647e6e16a39c33c5c6598026a443dbdb7593c387a26e844eb640c25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_torvalds, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:01:39 np0005591760 podman[264930]: 2026-01-22 10:01:39.486208785 +0000 UTC m=+0.109246231 container start e945f9ecb647e6e16a39c33c5c6598026a443dbdb7593c387a26e844eb640c25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_torvalds, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 05:01:39 np0005591760 charming_torvalds[264953]: 167 167
Jan 22 05:01:39 np0005591760 systemd[1]: libpod-e945f9ecb647e6e16a39c33c5c6598026a443dbdb7593c387a26e844eb640c25.scope: Deactivated successfully.
Jan 22 05:01:39 np0005591760 podman[264930]: 2026-01-22 10:01:39.490511932 +0000 UTC m=+0.113549397 container attach e945f9ecb647e6e16a39c33c5c6598026a443dbdb7593c387a26e844eb640c25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_torvalds, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:01:39 np0005591760 podman[264930]: 2026-01-22 10:01:39.492119554 +0000 UTC m=+0.115157000 container died e945f9ecb647e6e16a39c33c5c6598026a443dbdb7593c387a26e844eb640c25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_torvalds, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:01:39 np0005591760 podman[264930]: 2026-01-22 10:01:39.403771552 +0000 UTC m=+0.026809018 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:01:39 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27454 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:39 np0005591760 systemd[1]: var-lib-containers-storage-overlay-5b6e4ec611a07b40ea6461d69d1ba03de84fb470f0ffec9d302fe3690e4ce931-merged.mount: Deactivated successfully.
Jan 22 05:01:39 np0005591760 podman[264930]: 2026-01-22 10:01:39.52859254 +0000 UTC m=+0.151629985 container remove e945f9ecb647e6e16a39c33c5c6598026a443dbdb7593c387a26e844eb640c25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_torvalds, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 05:01:39 np0005591760 systemd[1]: libpod-conmon-e945f9ecb647e6e16a39c33c5c6598026a443dbdb7593c387a26e844eb640c25.scope: Deactivated successfully.
Jan 22 05:01:39 np0005591760 podman[265009]: 2026-01-22 10:01:39.703958383 +0000 UTC m=+0.045399714 container create 09d51d9f3e522f591810b54a8684bde62e4d1bdb65e2090edf2d83bbaa35ca27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 05:01:39 np0005591760 systemd[1]: Started libpod-conmon-09d51d9f3e522f591810b54a8684bde62e4d1bdb65e2090edf2d83bbaa35ca27.scope.
Jan 22 05:01:39 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:01:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6006f47aa834018fa355984797555e4bf43d378116d57f57b094727d91a34183/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6006f47aa834018fa355984797555e4bf43d378116d57f57b094727d91a34183/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6006f47aa834018fa355984797555e4bf43d378116d57f57b094727d91a34183/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6006f47aa834018fa355984797555e4bf43d378116d57f57b094727d91a34183/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:39 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6006f47aa834018fa355984797555e4bf43d378116d57f57b094727d91a34183/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:39 np0005591760 podman[265009]: 2026-01-22 10:01:39.771209032 +0000 UTC m=+0.112650363 container init 09d51d9f3e522f591810b54a8684bde62e4d1bdb65e2090edf2d83bbaa35ca27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_clarke, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 05:01:39 np0005591760 podman[265009]: 2026-01-22 10:01:39.779428556 +0000 UTC m=+0.120869877 container start 09d51d9f3e522f591810b54a8684bde62e4d1bdb65e2090edf2d83bbaa35ca27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_clarke, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:01:39 np0005591760 podman[265009]: 2026-01-22 10:01:39.780715854 +0000 UTC m=+0.122157174 container attach 09d51d9f3e522f591810b54a8684bde62e4d1bdb65e2090edf2d83bbaa35ca27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_clarke, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 22 05:01:39 np0005591760 podman[265009]: 2026-01-22 10:01:39.686122513 +0000 UTC m=+0.027563854 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:01:39 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27440 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:39 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27443 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:39 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27490 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:39 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27496 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:01:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:01:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:01:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:01:40 np0005591760 sharp_clarke[265031]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:01:40 np0005591760 sharp_clarke[265031]: --> All data devices are unavailable
Jan 22 05:01:40 np0005591760 podman[265009]: 2026-01-22 10:01:40.077645081 +0000 UTC m=+0.419086402 container died 09d51d9f3e522f591810b54a8684bde62e4d1bdb65e2090edf2d83bbaa35ca27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_clarke, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 05:01:40 np0005591760 systemd[1]: libpod-09d51d9f3e522f591810b54a8684bde62e4d1bdb65e2090edf2d83bbaa35ca27.scope: Deactivated successfully.
Jan 22 05:01:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27461 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:40 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6006f47aa834018fa355984797555e4bf43d378116d57f57b094727d91a34183-merged.mount: Deactivated successfully.
Jan 22 05:01:40 np0005591760 podman[265009]: 2026-01-22 10:01:40.122376994 +0000 UTC m=+0.463818316 container remove 09d51d9f3e522f591810b54a8684bde62e4d1bdb65e2090edf2d83bbaa35ca27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 05:01:40 np0005591760 systemd[1]: libpod-conmon-09d51d9f3e522f591810b54a8684bde62e4d1bdb65e2090edf2d83bbaa35ca27.scope: Deactivated successfully.
Jan 22 05:01:40 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 22 05:01:40 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633301605' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 05:01:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27514 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27467 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17559 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:40.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27485 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:40 np0005591760 nova_compute[248045]: 2026-01-22 10:01:40.458 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27503 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17589 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27521 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:40 np0005591760 podman[265283]: 2026-01-22 10:01:40.748047803 +0000 UTC m=+0.040730266 container create 33e347322f7190658c93248b260e4605dcdeaaedb7bd41b893d8d2429ce0c989 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_edison, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:01:40 np0005591760 systemd[1]: Started libpod-conmon-33e347322f7190658c93248b260e4605dcdeaaedb7bd41b893d8d2429ce0c989.scope.
Jan 22 05:01:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:40 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v890: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:01:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:40.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:40 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:01:40 np0005591760 podman[265283]: 2026-01-22 10:01:40.815826036 +0000 UTC m=+0.108508499 container init 33e347322f7190658c93248b260e4605dcdeaaedb7bd41b893d8d2429ce0c989 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_edison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:01:40 np0005591760 podman[265283]: 2026-01-22 10:01:40.728710279 +0000 UTC m=+0.021392762 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:01:40 np0005591760 silly_edison[265318]: 167 167
Jan 22 05:01:40 np0005591760 podman[265283]: 2026-01-22 10:01:40.850513493 +0000 UTC m=+0.143195956 container start 33e347322f7190658c93248b260e4605dcdeaaedb7bd41b893d8d2429ce0c989 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_edison, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 22 05:01:40 np0005591760 systemd[1]: libpod-33e347322f7190658c93248b260e4605dcdeaaedb7bd41b893d8d2429ce0c989.scope: Deactivated successfully.
Jan 22 05:01:40 np0005591760 podman[265283]: 2026-01-22 10:01:40.85360272 +0000 UTC m=+0.146285183 container attach 33e347322f7190658c93248b260e4605dcdeaaedb7bd41b893d8d2429ce0c989 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_edison, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 22 05:01:40 np0005591760 podman[265283]: 2026-01-22 10:01:40.853946579 +0000 UTC m=+0.146629042 container died 33e347322f7190658c93248b260e4605dcdeaaedb7bd41b893d8d2429ce0c989 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 05:01:40 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27553 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:40 np0005591760 systemd[1]: var-lib-containers-storage-overlay-0e4e8e9630c2c59ce953b7f651b18b6d842cf8b0db058a54f3e2658431d5c6e8-merged.mount: Deactivated successfully.
Jan 22 05:01:40 np0005591760 podman[265283]: 2026-01-22 10:01:40.901289527 +0000 UTC m=+0.193971990 container remove 33e347322f7190658c93248b260e4605dcdeaaedb7bd41b893d8d2429ce0c989 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 22 05:01:40 np0005591760 systemd[1]: libpod-conmon-33e347322f7190658c93248b260e4605dcdeaaedb7bd41b893d8d2429ce0c989.scope: Deactivated successfully.
Jan 22 05:01:41 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27574 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:41 np0005591760 podman[265363]: 2026-01-22 10:01:41.09184371 +0000 UTC m=+0.042341014 container create d88fe530d4feb69119540ca0aa2856be5090501aa47b3b4efc9bb6693f50b852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 22 05:01:41 np0005591760 systemd[1]: Started libpod-conmon-d88fe530d4feb69119540ca0aa2856be5090501aa47b3b4efc9bb6693f50b852.scope.
Jan 22 05:01:41 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:01:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/357e14578a80968b226012e32e10a0855562aa6e1aa2822879cdcd83350fe993/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/357e14578a80968b226012e32e10a0855562aa6e1aa2822879cdcd83350fe993/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/357e14578a80968b226012e32e10a0855562aa6e1aa2822879cdcd83350fe993/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:41 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/357e14578a80968b226012e32e10a0855562aa6e1aa2822879cdcd83350fe993/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:41 np0005591760 podman[265363]: 2026-01-22 10:01:41.145278227 +0000 UTC m=+0.095775551 container init d88fe530d4feb69119540ca0aa2856be5090501aa47b3b4efc9bb6693f50b852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_vaughan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:01:41 np0005591760 podman[265363]: 2026-01-22 10:01:41.154528736 +0000 UTC m=+0.105026039 container start d88fe530d4feb69119540ca0aa2856be5090501aa47b3b4efc9bb6693f50b852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 22 05:01:41 np0005591760 podman[265363]: 2026-01-22 10:01:41.155758084 +0000 UTC m=+0.106255388 container attach d88fe530d4feb69119540ca0aa2856be5090501aa47b3b4efc9bb6693f50b852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:01:41 np0005591760 podman[265363]: 2026-01-22 10:01:41.076052384 +0000 UTC m=+0.026549708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:01:41 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 22 05:01:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225377840' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 05:01:41 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27557 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:41 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 22 05:01:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2325373554' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]: {
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:    "0": [
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:        {
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            "devices": [
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "/dev/loop3"
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            ],
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            "lv_name": "ceph_lv0",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            "lv_size": "21470642176",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            "name": "ceph_lv0",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            "tags": {
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.cluster_name": "ceph",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.crush_device_class": "",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.encrypted": "0",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.osd_id": "0",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.type": "block",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.vdo": "0",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:                "ceph.with_tpm": "0"
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            },
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            "type": "block",
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:            "vg_name": "ceph_vg0"
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:        }
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]:    ]
Jan 22 05:01:41 np0005591760 dazzling_vaughan[265392]: }
Jan 22 05:01:41 np0005591760 systemd[1]: libpod-d88fe530d4feb69119540ca0aa2856be5090501aa47b3b4efc9bb6693f50b852.scope: Deactivated successfully.
Jan 22 05:01:41 np0005591760 podman[265363]: 2026-01-22 10:01:41.397103871 +0000 UTC m=+0.347601175 container died d88fe530d4feb69119540ca0aa2856be5090501aa47b3b4efc9bb6693f50b852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_vaughan, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 05:01:41 np0005591760 systemd[1]: var-lib-containers-storage-overlay-357e14578a80968b226012e32e10a0855562aa6e1aa2822879cdcd83350fe993-merged.mount: Deactivated successfully.
Jan 22 05:01:41 np0005591760 podman[265363]: 2026-01-22 10:01:41.438011701 +0000 UTC m=+0.388509005 container remove d88fe530d4feb69119540ca0aa2856be5090501aa47b3b4efc9bb6693f50b852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:01:41 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27572 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:41 np0005591760 systemd[1]: libpod-conmon-d88fe530d4feb69119540ca0aa2856be5090501aa47b3b4efc9bb6693f50b852.scope: Deactivated successfully.
Jan 22 05:01:41 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27610 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:41 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 22 05:01:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/956566074' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 05:01:41 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 22 05:01:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1203496290' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 05:01:41 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27602 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 05:01:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 05:01:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 05:01:41 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 05:01:41 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27640 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:42 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27620 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 22 05:01:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573964159' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 05:01:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 05:01:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 05:01:42 np0005591760 podman[265594]: 2026-01-22 10:01:42.234112254 +0000 UTC m=+0.055185783 container create ef7feff75cfaa9dbbfe584b52f88b14cabdfa0808c701cdb42927806c6d29694 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 22 05:01:42 np0005591760 systemd[1]: Started libpod-conmon-ef7feff75cfaa9dbbfe584b52f88b14cabdfa0808c701cdb42927806c6d29694.scope.
Jan 22 05:01:42 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:01:42 np0005591760 podman[265594]: 2026-01-22 10:01:42.302847241 +0000 UTC m=+0.123920791 container init ef7feff75cfaa9dbbfe584b52f88b14cabdfa0808c701cdb42927806c6d29694 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_heisenberg, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 05:01:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 05:01:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 05:01:42 np0005591760 podman[265594]: 2026-01-22 10:01:42.215638468 +0000 UTC m=+0.036712018 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:01:42 np0005591760 podman[265594]: 2026-01-22 10:01:42.321932159 +0000 UTC m=+0.143005688 container start ef7feff75cfaa9dbbfe584b52f88b14cabdfa0808c701cdb42927806c6d29694 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_heisenberg, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:01:42 np0005591760 podman[265594]: 2026-01-22 10:01:42.323582452 +0000 UTC m=+0.144655992 container attach ef7feff75cfaa9dbbfe584b52f88b14cabdfa0808c701cdb42927806c6d29694 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 22 05:01:42 np0005591760 optimistic_heisenberg[265621]: 167 167
Jan 22 05:01:42 np0005591760 systemd[1]: libpod-ef7feff75cfaa9dbbfe584b52f88b14cabdfa0808c701cdb42927806c6d29694.scope: Deactivated successfully.
Jan 22 05:01:42 np0005591760 conmon[265621]: conmon ef7feff75cfaa9dbbfe5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef7feff75cfaa9dbbfe584b52f88b14cabdfa0808c701cdb42927806c6d29694.scope/container/memory.events
Jan 22 05:01:42 np0005591760 podman[265594]: 2026-01-22 10:01:42.330173744 +0000 UTC m=+0.151247273 container died ef7feff75cfaa9dbbfe584b52f88b14cabdfa0808c701cdb42927806c6d29694 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 05:01:42 np0005591760 systemd[1]: var-lib-containers-storage-overlay-233ed32d45ccbd756cd3ac672c07141613bae3c2f9eef9b05d94f1e84ff1d3c4-merged.mount: Deactivated successfully.
Jan 22 05:01:42 np0005591760 podman[265594]: 2026-01-22 10:01:42.366568563 +0000 UTC m=+0.187642091 container remove ef7feff75cfaa9dbbfe584b52f88b14cabdfa0808c701cdb42927806c6d29694 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_heisenberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 05:01:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:42.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:42 np0005591760 systemd[1]: libpod-conmon-ef7feff75cfaa9dbbfe584b52f88b14cabdfa0808c701cdb42927806c6d29694.scope: Deactivated successfully.
Jan 22 05:01:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 22 05:01:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/475357094' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 05:01:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 22 05:01:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1848773747' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 05:01:42 np0005591760 podman[265675]: 2026-01-22 10:01:42.637625509 +0000 UTC m=+0.048575845 container create 3d4dd35ed420b0863eacfaa717d2a29cf801b324121f80ab13acb179b467d590 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:01:42 np0005591760 systemd[1]: Started libpod-conmon-3d4dd35ed420b0863eacfaa717d2a29cf801b324121f80ab13acb179b467d590.scope.
Jan 22 05:01:42 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:01:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4aca1cd98e7842dd5383e9ea04153227421986fdbeee6fbd2897a7130c26f24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4aca1cd98e7842dd5383e9ea04153227421986fdbeee6fbd2897a7130c26f24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4aca1cd98e7842dd5383e9ea04153227421986fdbeee6fbd2897a7130c26f24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:42 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4aca1cd98e7842dd5383e9ea04153227421986fdbeee6fbd2897a7130c26f24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:01:42 np0005591760 podman[265675]: 2026-01-22 10:01:42.711757226 +0000 UTC m=+0.122707553 container init 3d4dd35ed420b0863eacfaa717d2a29cf801b324121f80ab13acb179b467d590 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:01:42 np0005591760 podman[265675]: 2026-01-22 10:01:42.620682842 +0000 UTC m=+0.031633188 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:01:42 np0005591760 podman[265675]: 2026-01-22 10:01:42.722965667 +0000 UTC m=+0.133915994 container start 3d4dd35ed420b0863eacfaa717d2a29cf801b324121f80ab13acb179b467d590 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:01:42 np0005591760 podman[265675]: 2026-01-22 10:01:42.7241598 +0000 UTC m=+0.135110126 container attach 3d4dd35ed420b0863eacfaa717d2a29cf801b324121f80ab13acb179b467d590 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kalam, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 05:01:42 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v891: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:01:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:42.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:42 np0005591760 nova_compute[248045]: 2026-01-22 10:01:42.925 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:42 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 22 05:01:42 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3838704163' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 05:01:43 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17796 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:43 np0005591760 lucid_kalam[265714]: {}
Jan 22 05:01:43 np0005591760 lvm[265836]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:01:43 np0005591760 lvm[265836]: VG ceph_vg0 finished
Jan 22 05:01:43 np0005591760 systemd[1]: libpod-3d4dd35ed420b0863eacfaa717d2a29cf801b324121f80ab13acb179b467d590.scope: Deactivated successfully.
Jan 22 05:01:43 np0005591760 podman[265675]: 2026-01-22 10:01:43.352023996 +0000 UTC m=+0.762974322 container died 3d4dd35ed420b0863eacfaa717d2a29cf801b324121f80ab13acb179b467d590 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:01:43 np0005591760 lvm[265838]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:01:43 np0005591760 lvm[265838]: VG ceph_vg0 finished
Jan 22 05:01:43 np0005591760 systemd[1]: var-lib-containers-storage-overlay-f4aca1cd98e7842dd5383e9ea04153227421986fdbeee6fbd2897a7130c26f24-merged.mount: Deactivated successfully.
Jan 22 05:01:43 np0005591760 podman[265675]: 2026-01-22 10:01:43.394640128 +0000 UTC m=+0.805590455 container remove 3d4dd35ed420b0863eacfaa717d2a29cf801b324121f80ab13acb179b467d590 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 22 05:01:43 np0005591760 systemd[1]: libpod-conmon-3d4dd35ed420b0863eacfaa717d2a29cf801b324121f80ab13acb179b467d590.scope: Deactivated successfully.
Jan 22 05:01:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:01:43 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27737 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:01:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:01:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:01:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 22 05:01:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3698442464' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 05:01:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:01:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 22 05:01:43 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2492748195' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 05:01:44 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:01:44 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:01:44 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17835 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:44 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17853 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:44.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:44 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27820 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:44 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17871 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:44 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27841 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:44 np0005591760 systemd[1]: Starting Hostname Service...
Jan 22 05:01:44 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v892: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:01:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:44.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:44 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17886 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:44 np0005591760 systemd[1]: Started Hostname Service.
Jan 22 05:01:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:01:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:45 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:01:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:45 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:01:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:45 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:01:45 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17913 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:45 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27836 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:45 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17925 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:45 np0005591760 nova_compute[248045]: 2026-01-22 10:01:45.460 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Jan 22 05:01:45 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2076798598' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 05:01:45 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27892 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:45 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17937 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:45 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 22 05:01:45 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3475981909' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 05:01:46 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17958 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:46 np0005591760 podman[266316]: 2026-01-22 10:01:46.08748794 +0000 UTC m=+0.078137443 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 22 05:01:46 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27890 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:46 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 22 05:01:46 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540665183' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 05:01:46 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17970 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:46.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:46 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 05:01:46 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 05:01:46 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 05:01:46 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 05:01:46 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.17991 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:46 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27923 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:46 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v893: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:01:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:46.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:47.073Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.27941 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:47.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:47.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:47.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:01:47.320 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:01:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:01:47.320 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:01:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:01:47.320 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:01:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 22 05:01:47 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/515819734' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28006 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:01:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:01:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18072 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28018 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:47 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:01:47 np0005591760 nova_compute[248045]: 2026-01-22 10:01:47.927 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28010 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28022 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:01:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:48.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Jan 22 05:01:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943918456' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 22 05:01:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v894: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:01:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:48.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:48 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28060 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 22 05:01:48 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025292210' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 22 05:01:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:48.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:48.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:48.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:48.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28078 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 22 05:01:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449153188' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:01:49
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.nfs', 'default.rgw.log', 'images', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'vms']
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28058 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:01:49 np0005591760 podman[266895]: 2026-01-22 10:01:49.478508675 +0000 UTC m=+0.082284245 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18153 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:01:49 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28070 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 22 05:01:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1806087225' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 22 05:01:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:01:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:50 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:01:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:50 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:01:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:50 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:01:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 22 05:01:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761281014' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 22 05:01:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:50.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:50 np0005591760 nova_compute[248045]: 2026-01-22 10:01:50.462 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:50 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28126 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:50 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18192 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:50 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v895: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:01:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:50.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:50 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 22 05:01:50 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1376445272' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 22 05:01:51 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28127 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 22 05:01:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2202478466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 05:01:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 22 05:01:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2202478466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 05:01:51 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28165 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:51 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28163 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 22 05:01:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2407669118' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28193 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Jan 22 05:01:52 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3054430764' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 22 05:01:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:52.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18288 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28217 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v896: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:52.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18300 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:52 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:01:52 np0005591760 nova_compute[248045]: 2026-01-22 10:01:52.929 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:53 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28267 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:01:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Jan 22 05:01:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1081412021' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 22 05:01:53 np0005591760 ovs-appctl[268414]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 22 05:01:53 np0005591760 ovs-appctl[268426]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 22 05:01:53 np0005591760 ovs-appctl[268433]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 22 05:01:53 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28268 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:53 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28282 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:53 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18339 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:54 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28294 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:54 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18348 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:01:54 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28295 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:54.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:54 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28310 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:54 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v897: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:54.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Jan 22 05:01:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/342910288' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 22 05:01:54 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28339 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:01:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:55 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:01:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:55 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:01:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:55 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:01:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Jan 22 05:01:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/210126280' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28351 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28346 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:55 np0005591760 nova_compute[248045]: 2026-01-22 10:01:55.463 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28366 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28364 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:55 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
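The pg_autoscaler lines above all follow one arithmetic pattern: the printed "pg target" equals the pool's capacity ratio times its bias times a constant that, for every pool in this log, works out to exactly 300 (plausibly a per-OSD PG budget of 100 across 3 OSDs, though that breakdown is an assumption, not something the log states). A minimal sketch reproducing the logged numbers:

```python
# Hedged reconstruction of the arithmetic visible in the pg_autoscaler lines.
# The factor 300 is inferred from the logged values themselves (every pool's
# "pg target" is exactly capacity_ratio * bias * 300); the guess that it is
# mon_target_pg_per_osd (100) times 3 OSDs is an assumption.

def raw_pg_target(capacity_ratio: float, bias: float, pg_budget: int = 300) -> float:
    """Reproduce the raw 'pg target' value printed by the autoscaler."""
    return capacity_ratio * bias * pg_budget

# Values copied from the 'images' and 'cephfs.cephfs.meta' log lines:
print(raw_pg_target(0.000665858301588852, 1.0))   # matches 0.19975749047665559
print(raw_pg_target(5.087256625643029e-07, 4.0))  # matches 0.0006104707950771635
```

The "quantized to 32 (current 32)" tail shows the raw target being far below the existing pg_num; the autoscaler leaves pg_num unchanged rather than shrinking toward a fractional target.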
Jan 22 05:01:56 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28396 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Jan 22 05:01:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947881760' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 22 05:01:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:56.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:56 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28408 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:56 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28400 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:56 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Jan 22 05:01:56 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1424493924' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 22 05:01:56 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v898: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:01:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:56.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:56 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28412 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Jan 22 05:01:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105458384' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 22 05:01:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:57.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:57.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:57.089Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:57.089Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:57 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18468 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:01:57] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:01:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:01:57] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:01:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Jan 22 05:01:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2842536977' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 22 05:01:57 np0005591760 nova_compute[248045]: 2026-01-22 10:01:57.931 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:01:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Jan 22 05:01:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3375553549' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 22 05:01:58 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28465 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:01:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:01:58 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v899: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:01:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:01:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:01:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:01:58.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:01:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:58.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:58.917Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:58.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:01:58.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:01:58 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18498 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18504 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:01:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Jan 22 05:01:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3322744439' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:01:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:01:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Jan 22 05:01:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1363694985' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 22 05:02:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:01:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:00 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:00 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:00 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18522 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:02:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:00.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:00 np0005591760 nova_compute[248045]: 2026-01-22 10:02:00.464 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18528 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:02:00 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v900: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:02:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:00.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Jan 22 05:02:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300542423' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 22 05:02:01 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18546 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:02:01 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18552 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:02:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 22 05:02:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/77581902' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 22 05:02:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:02.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Jan 22 05:02:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2835417434' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 22 05:02:02 np0005591760 virtqemud[247788]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 22 05:02:02 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v901: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:02:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:02.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:02:02 np0005591760 nova_compute[248045]: 2026-01-22 10:02:02.933 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:02:03 np0005591760 systemd[1]: Starting Time & Date Service...
Jan 22 05:02:03 np0005591760 systemd[1]: Started Time & Date Service.
Jan 22 05:02:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:02:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:04.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:04 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v902: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:04.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:05 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:05 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:05 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:05 np0005591760 nova_compute[248045]: 2026-01-22 10:02:05.466 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:02:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:02:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:06.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:02:06 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v903: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:02:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:06.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:07.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:07.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:07.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:07.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:02:07] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:02:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:02:07] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:02:07 np0005591760 nova_compute[248045]: 2026-01-22 10:02:07.934 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:02:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:08.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:02:08 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v904: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:08.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:08.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:08.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:08.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:08.919Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:10 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:10.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:10 np0005591760 nova_compute[248045]: 2026-01-22 10:02:10.469 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:10 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v905: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:02:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:10.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:12.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:12 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v906: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:12.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:12 np0005591760 nova_compute[248045]: 2026-01-22 10:02:12.935 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:02:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:02:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:14.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:02:14 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v907: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:14.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:15 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:15 np0005591760 nova_compute[248045]: 2026-01-22 10:02:15.472 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:16 np0005591760 nova_compute[248045]: 2026-01-22 10:02:16.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:02:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:16.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:16 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v908: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:02:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:16.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:17 np0005591760 podman[270800]: 2026-01-22 10:02:17.049219549 +0000 UTC m=+0.039591839 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 05:02:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:17.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:17.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:17.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:17.086Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:17 np0005591760 nova_compute[248045]: 2026-01-22 10:02:17.296 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:02:17 np0005591760 nova_compute[248045]: 2026-01-22 10:02:17.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:02:17 np0005591760 nova_compute[248045]: 2026-01-22 10:02:17.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:02:17 np0005591760 nova_compute[248045]: 2026-01-22 10:02:17.299 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:02:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:02:17] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:02:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:02:17] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:02:17 np0005591760 nova_compute[248045]: 2026-01-22 10:02:17.938 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 05:02:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5986 writes, 26K keys, 5986 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s#012Cumulative WAL: 5986 writes, 5986 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1607 writes, 6921 keys, 1607 commit groups, 1.0 writes per commit group, ingest: 11.75 MB, 0.02 MB/s#012Interval WAL: 1607 writes, 1607 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    375.4      0.11              0.07        14    0.008       0      0       0.0       0.0#012  L6      1/0   11.49 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    467.4    396.7      0.42              0.28        13    0.033     67K   7011       0.0       0.0#012 Sum      1/0   11.49 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    370.8    392.3      0.53              0.35        27    0.020     67K   7011       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1    398.7    395.5      0.16              0.11         8    0.020     24K   2076       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    467.4    396.7      0.42              0.28        13    0.033     67K   7011       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    379.5      0.11              0.07        13    0.008       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     33.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.040, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.20 GB write, 0.12 MB/s write, 0.19 GB read, 0.11 MB/s read, 0.5 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d6a5b429b0#2 capacity: 304.00 MB usage: 14.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(919,14.08 MB,4.6315%) FilterBlock(28,196.48 KB,0.0631182%) IndexBlock(28,359.98 KB,0.115641%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 05:02:18 np0005591760 nova_compute[248045]: 2026-01-22 10:02:18.295 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:02:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:18.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:02:18 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v909: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:18.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:18.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:18.929Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:18.930Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:18.930Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:02:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:02:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.333 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.333 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.333 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.333 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.334 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:02:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:02:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:02:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:02:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:02:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:02:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2209946615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.674 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.882 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.883 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4381MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.883 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.884 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.953 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.954 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:02:19 np0005591760 nova_compute[248045]: 2026-01-22 10:02:19.970 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:02:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:20 np0005591760 podman[270842]: 2026-01-22 10:02:20.084055029 +0000 UTC m=+0.075947694 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 05:02:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:02:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3405492509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:02:20 np0005591760 nova_compute[248045]: 2026-01-22 10:02:20.324 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:02:20 np0005591760 nova_compute[248045]: 2026-01-22 10:02:20.328 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:02:20 np0005591760 nova_compute[248045]: 2026-01-22 10:02:20.345 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:02:20 np0005591760 nova_compute[248045]: 2026-01-22 10:02:20.346 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:02:20 np0005591760 nova_compute[248045]: 2026-01-22 10:02:20.346 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:02:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:02:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:20.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:02:20 np0005591760 nova_compute[248045]: 2026-01-22 10:02:20.474 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:20 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v910: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:02:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:20.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:22 np0005591760 nova_compute[248045]: 2026-01-22 10:02:22.347 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:02:22 np0005591760 nova_compute[248045]: 2026-01-22 10:02:22.348 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:02:22 np0005591760 nova_compute[248045]: 2026-01-22 10:02:22.348 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:02:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:02:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:22.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:02:22 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v911: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:22.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:22 np0005591760 nova_compute[248045]: 2026-01-22 10:02:22.939 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:23 np0005591760 nova_compute[248045]: 2026-01-22 10:02:23.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:02:23 np0005591760 nova_compute[248045]: 2026-01-22 10:02:23.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:02:23 np0005591760 nova_compute[248045]: 2026-01-22 10:02:23.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:02:23 np0005591760 nova_compute[248045]: 2026-01-22 10:02:23.315 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:02:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4100696067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:02:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:24.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:24 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v912: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:24.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.885520) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076144885569, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2487, "num_deletes": 251, "total_data_size": 4287775, "memory_usage": 4343624, "flush_reason": "Manual Compaction"}
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076144894532, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4169127, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24829, "largest_seqno": 27315, "table_properties": {"data_size": 4157544, "index_size": 7117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 29595, "raw_average_key_size": 22, "raw_value_size": 4132499, "raw_average_value_size": 3083, "num_data_blocks": 308, "num_entries": 1340, "num_filter_entries": 1340, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769075949, "oldest_key_time": 1769075949, "file_creation_time": 1769076144, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 9037 microseconds, and 5730 cpu microseconds.
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.894562) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4169127 bytes OK
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.894574) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.895029) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.895042) EVENT_LOG_v1 {"time_micros": 1769076144895038, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.895053) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4276607, prev total WAL file size 4276607, number of live WAL files 2.
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.895747) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(4071KB)], [56(11MB)]
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076144895804, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 16220766, "oldest_snapshot_seqno": -1}
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 6323 keys, 14119868 bytes, temperature: kUnknown
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076144925195, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14119868, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14078219, "index_size": 24771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 161348, "raw_average_key_size": 25, "raw_value_size": 13964903, "raw_average_value_size": 2208, "num_data_blocks": 1005, "num_entries": 6323, "num_filter_entries": 6323, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769076144, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.925330) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14119868 bytes
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.925680) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 551.3 rd, 479.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 11.5 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 6839, records dropped: 516 output_compression: NoCompression
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.925697) EVENT_LOG_v1 {"time_micros": 1769076144925686, "job": 30, "event": "compaction_finished", "compaction_time_micros": 29424, "compaction_time_cpu_micros": 20484, "output_level": 6, "num_output_files": 1, "total_output_size": 14119868, "num_input_records": 6839, "num_output_records": 6323, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076144926226, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076144927686, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.895691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.927709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.927712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.927713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.927714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:02:24 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:02:24.927715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:02:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:25 np0005591760 nova_compute[248045]: 2026-01-22 10:02:25.475 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:02:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:26.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:02:26 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v913: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:02:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:02:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:26.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:02:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:27.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:27.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:27.087Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:27.087Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:02:27] "GET /metrics HTTP/1.1" 200 48597 "" "Prometheus/2.51.0"
Jan 22 05:02:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:02:27] "GET /metrics HTTP/1.1" 200 48597 "" "Prometheus/2.51.0"
Jan 22 05:02:27 np0005591760 nova_compute[248045]: 2026-01-22 10:02:27.941 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:02:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:28.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:02:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:02:28 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v914: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:28.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:28.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:28.925Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:28.926Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:28.926Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:02:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:30.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:02:30 np0005591760 nova_compute[248045]: 2026-01-22 10:02:30.478 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:30 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v915: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:02:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:30.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:32.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:32 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v916: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:32.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:32 np0005591760 nova_compute[248045]: 2026-01-22 10:02:32.942 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:33 np0005591760 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 05:02:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:02:33 np0005591760 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 05:02:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:33 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:33 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:33 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:34.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:34 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v917: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:34.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:35 np0005591760 nova_compute[248045]: 2026-01-22 10:02:35.479 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:02:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:36.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:02:36 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v918: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:02:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:36.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:37.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:37.084Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:37.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:37.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:02:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:02:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:02:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:02:37 np0005591760 nova_compute[248045]: 2026-01-22 10:02:37.944 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:38.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:02:38 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v919: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:38.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:38.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:38.919Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:38.919Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:38.919Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:38 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:38 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:38 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:38 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:40.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:40 np0005591760 nova_compute[248045]: 2026-01-22 10:02:40.480 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:40 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v920: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:02:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:40.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:42.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:42 np0005591760 systemd[1]: session-55.scope: Deactivated successfully.
Jan 22 05:02:42 np0005591760 systemd[1]: session-55.scope: Consumed 2min 12.653s CPU time, 857.4M memory peak, read 375.3M from disk, written 44.4M to disk.
Jan 22 05:02:42 np0005591760 systemd-logind[747]: Session 55 logged out. Waiting for processes to exit.
Jan 22 05:02:42 np0005591760 systemd-logind[747]: Removed session 55.
Jan 22 05:02:42 np0005591760 systemd-logind[747]: New session 56 of user zuul.
Jan 22 05:02:42 np0005591760 systemd[1]: Started Session 56 of User zuul.
Jan 22 05:02:42 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v921: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:42.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:42 np0005591760 systemd[1]: session-56.scope: Deactivated successfully.
Jan 22 05:02:42 np0005591760 systemd-logind[747]: Session 56 logged out. Waiting for processes to exit.
Jan 22 05:02:42 np0005591760 systemd-logind[747]: Removed session 56.
Jan 22 05:02:42 np0005591760 nova_compute[248045]: 2026-01-22 10:02:42.946 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:42 np0005591760 systemd-logind[747]: New session 57 of user zuul.
Jan 22 05:02:42 np0005591760 systemd[1]: Started Session 57 of User zuul.
Jan 22 05:02:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:43 np0005591760 systemd[1]: session-57.scope: Deactivated successfully.
Jan 22 05:02:43 np0005591760 systemd-logind[747]: Session 57 logged out. Waiting for processes to exit.
Jan 22 05:02:43 np0005591760 systemd-logind[747]: Removed session 57.
Jan 22 05:02:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:02:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:44.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:44 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v922: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:02:44 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:02:44 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:02:44 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:02:44 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:02:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:44.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:44 np0005591760 podman[271159]: 2026-01-22 10:02:44.860843282 +0000 UTC m=+0.029175413 container create eb637d1c9d0b5b047da7285f7f71a449e1f3b8498dd0946ca1c027d6ce9be347 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:02:44 np0005591760 systemd[1]: Started libpod-conmon-eb637d1c9d0b5b047da7285f7f71a449e1f3b8498dd0946ca1c027d6ce9be347.scope.
Jan 22 05:02:44 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:02:44 np0005591760 podman[271159]: 2026-01-22 10:02:44.908307168 +0000 UTC m=+0.076639309 container init eb637d1c9d0b5b047da7285f7f71a449e1f3b8498dd0946ca1c027d6ce9be347 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ishizaka, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:02:44 np0005591760 podman[271159]: 2026-01-22 10:02:44.913518197 +0000 UTC m=+0.081850328 container start eb637d1c9d0b5b047da7285f7f71a449e1f3b8498dd0946ca1c027d6ce9be347 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ishizaka, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 22 05:02:44 np0005591760 podman[271159]: 2026-01-22 10:02:44.914605758 +0000 UTC m=+0.082937889 container attach eb637d1c9d0b5b047da7285f7f71a449e1f3b8498dd0946ca1c027d6ce9be347 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:02:44 np0005591760 elegant_ishizaka[271172]: 167 167
Jan 22 05:02:44 np0005591760 systemd[1]: libpod-eb637d1c9d0b5b047da7285f7f71a449e1f3b8498dd0946ca1c027d6ce9be347.scope: Deactivated successfully.
Jan 22 05:02:44 np0005591760 conmon[271172]: conmon eb637d1c9d0b5b047da7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb637d1c9d0b5b047da7285f7f71a449e1f3b8498dd0946ca1c027d6ce9be347.scope/container/memory.events
Jan 22 05:02:44 np0005591760 podman[271159]: 2026-01-22 10:02:44.91765539 +0000 UTC m=+0.085987532 container died eb637d1c9d0b5b047da7285f7f71a449e1f3b8498dd0946ca1c027d6ce9be347 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ishizaka, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:02:44 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:02:44 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:02:44 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:02:44 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:02:44 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6ebbda80bcef697f99ab54057012528f567e3644a772325e1ef295555755252b-merged.mount: Deactivated successfully.
Jan 22 05:02:44 np0005591760 podman[271159]: 2026-01-22 10:02:44.940638561 +0000 UTC m=+0.108970692 container remove eb637d1c9d0b5b047da7285f7f71a449e1f3b8498dd0946ca1c027d6ce9be347 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:02:44 np0005591760 podman[271159]: 2026-01-22 10:02:44.849361816 +0000 UTC m=+0.017693966 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:02:44 np0005591760 systemd[1]: libpod-conmon-eb637d1c9d0b5b047da7285f7f71a449e1f3b8498dd0946ca1c027d6ce9be347.scope: Deactivated successfully.
Jan 22 05:02:45 np0005591760 podman[271194]: 2026-01-22 10:02:45.060257957 +0000 UTC m=+0.029553124 container create a699e80049520f14d475e1bf389f6a03076aba1d53693a07498e0a476a80811b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 22 05:02:45 np0005591760 systemd[1]: Started libpod-conmon-a699e80049520f14d475e1bf389f6a03076aba1d53693a07498e0a476a80811b.scope.
Jan 22 05:02:45 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:02:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cdd5fb9d73d0df151ca84e457afd0ec787b5c45671060d65f307db5ee6b682/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cdd5fb9d73d0df151ca84e457afd0ec787b5c45671060d65f307db5ee6b682/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cdd5fb9d73d0df151ca84e457afd0ec787b5c45671060d65f307db5ee6b682/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cdd5fb9d73d0df151ca84e457afd0ec787b5c45671060d65f307db5ee6b682/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:45 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cdd5fb9d73d0df151ca84e457afd0ec787b5c45671060d65f307db5ee6b682/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:45 np0005591760 podman[271194]: 2026-01-22 10:02:45.111088413 +0000 UTC m=+0.080383570 container init a699e80049520f14d475e1bf389f6a03076aba1d53693a07498e0a476a80811b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:02:45 np0005591760 podman[271194]: 2026-01-22 10:02:45.117558227 +0000 UTC m=+0.086853384 container start a699e80049520f14d475e1bf389f6a03076aba1d53693a07498e0a476a80811b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 22 05:02:45 np0005591760 podman[271194]: 2026-01-22 10:02:45.118862026 +0000 UTC m=+0.088157184 container attach a699e80049520f14d475e1bf389f6a03076aba1d53693a07498e0a476a80811b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 05:02:45 np0005591760 podman[271194]: 2026-01-22 10:02:45.047933391 +0000 UTC m=+0.017228568 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:02:45 np0005591760 heuristic_shirley[271208]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:02:45 np0005591760 heuristic_shirley[271208]: --> All data devices are unavailable
Jan 22 05:02:45 np0005591760 systemd[1]: libpod-a699e80049520f14d475e1bf389f6a03076aba1d53693a07498e0a476a80811b.scope: Deactivated successfully.
Jan 22 05:02:45 np0005591760 podman[271224]: 2026-01-22 10:02:45.423728604 +0000 UTC m=+0.017964185 container died a699e80049520f14d475e1bf389f6a03076aba1d53693a07498e0a476a80811b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:02:45 np0005591760 systemd[1]: var-lib-containers-storage-overlay-89cdd5fb9d73d0df151ca84e457afd0ec787b5c45671060d65f307db5ee6b682-merged.mount: Deactivated successfully.
Jan 22 05:02:45 np0005591760 podman[271224]: 2026-01-22 10:02:45.444026701 +0000 UTC m=+0.038262271 container remove a699e80049520f14d475e1bf389f6a03076aba1d53693a07498e0a476a80811b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:02:45 np0005591760 systemd[1]: libpod-conmon-a699e80049520f14d475e1bf389f6a03076aba1d53693a07498e0a476a80811b.scope: Deactivated successfully.
Jan 22 05:02:45 np0005591760 nova_compute[248045]: 2026-01-22 10:02:45.483 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:45 np0005591760 podman[271317]: 2026-01-22 10:02:45.869290382 +0000 UTC m=+0.028949395 container create 5387b33744561dae289160f2e4836879516f2bedc1f6bfa53b5915face8ad9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lamarr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:02:45 np0005591760 systemd[1]: Started libpod-conmon-5387b33744561dae289160f2e4836879516f2bedc1f6bfa53b5915face8ad9f9.scope.
Jan 22 05:02:45 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:02:45 np0005591760 podman[271317]: 2026-01-22 10:02:45.913062284 +0000 UTC m=+0.072721307 container init 5387b33744561dae289160f2e4836879516f2bedc1f6bfa53b5915face8ad9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lamarr, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 22 05:02:45 np0005591760 podman[271317]: 2026-01-22 10:02:45.91733313 +0000 UTC m=+0.076992143 container start 5387b33744561dae289160f2e4836879516f2bedc1f6bfa53b5915face8ad9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 05:02:45 np0005591760 podman[271317]: 2026-01-22 10:02:45.918287891 +0000 UTC m=+0.077946903 container attach 5387b33744561dae289160f2e4836879516f2bedc1f6bfa53b5915face8ad9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 22 05:02:45 np0005591760 upbeat_lamarr[271330]: 167 167
Jan 22 05:02:45 np0005591760 systemd[1]: libpod-5387b33744561dae289160f2e4836879516f2bedc1f6bfa53b5915face8ad9f9.scope: Deactivated successfully.
Jan 22 05:02:45 np0005591760 conmon[271330]: conmon 5387b33744561dae2891 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5387b33744561dae289160f2e4836879516f2bedc1f6bfa53b5915face8ad9f9.scope/container/memory.events
Jan 22 05:02:45 np0005591760 podman[271317]: 2026-01-22 10:02:45.921348003 +0000 UTC m=+0.081007016 container died 5387b33744561dae289160f2e4836879516f2bedc1f6bfa53b5915face8ad9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:02:45 np0005591760 systemd[1]: var-lib-containers-storage-overlay-054b3b41908c335fe9a89858d241272914fd03f7b3837092e3cebdb1c427392f-merged.mount: Deactivated successfully.
Jan 22 05:02:45 np0005591760 podman[271317]: 2026-01-22 10:02:45.94311488 +0000 UTC m=+0.102773892 container remove 5387b33744561dae289160f2e4836879516f2bedc1f6bfa53b5915face8ad9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lamarr, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 22 05:02:45 np0005591760 podman[271317]: 2026-01-22 10:02:45.857679171 +0000 UTC m=+0.017338183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:02:45 np0005591760 systemd[1]: libpod-conmon-5387b33744561dae289160f2e4836879516f2bedc1f6bfa53b5915face8ad9f9.scope: Deactivated successfully.
Jan 22 05:02:46 np0005591760 podman[271353]: 2026-01-22 10:02:46.063942013 +0000 UTC m=+0.028495639 container create cb0e4238f3d691b04c28717efaa4200196747cc722620380d27c935200172aa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 22 05:02:46 np0005591760 systemd[1]: Started libpod-conmon-cb0e4238f3d691b04c28717efaa4200196747cc722620380d27c935200172aa7.scope.
Jan 22 05:02:46 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:02:46 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fd3eaf1533f6240ebff3583788412b4cb43a3323c1f74a4c3c8978d14371e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:46 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fd3eaf1533f6240ebff3583788412b4cb43a3323c1f74a4c3c8978d14371e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:46 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fd3eaf1533f6240ebff3583788412b4cb43a3323c1f74a4c3c8978d14371e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:46 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fd3eaf1533f6240ebff3583788412b4cb43a3323c1f74a4c3c8978d14371e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:46 np0005591760 podman[271353]: 2026-01-22 10:02:46.118942976 +0000 UTC m=+0.083496622 container init cb0e4238f3d691b04c28717efaa4200196747cc722620380d27c935200172aa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Jan 22 05:02:46 np0005591760 podman[271353]: 2026-01-22 10:02:46.125143421 +0000 UTC m=+0.089697047 container start cb0e4238f3d691b04c28717efaa4200196747cc722620380d27c935200172aa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:02:46 np0005591760 podman[271353]: 2026-01-22 10:02:46.126854208 +0000 UTC m=+0.091407833 container attach cb0e4238f3d691b04c28717efaa4200196747cc722620380d27c935200172aa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:02:46 np0005591760 podman[271353]: 2026-01-22 10:02:46.052522433 +0000 UTC m=+0.017076069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]: {
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:    "0": [
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:        {
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            "devices": [
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "/dev/loop3"
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            ],
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            "lv_name": "ceph_lv0",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            "lv_size": "21470642176",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            "name": "ceph_lv0",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            "tags": {
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.cluster_name": "ceph",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.crush_device_class": "",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.encrypted": "0",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.osd_id": "0",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.type": "block",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.vdo": "0",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:                "ceph.with_tpm": "0"
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            },
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            "type": "block",
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:            "vg_name": "ceph_vg0"
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:        }
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]:    ]
Jan 22 05:02:46 np0005591760 interesting_leavitt[271366]: }
Jan 22 05:02:46 np0005591760 systemd[1]: libpod-cb0e4238f3d691b04c28717efaa4200196747cc722620380d27c935200172aa7.scope: Deactivated successfully.
Jan 22 05:02:46 np0005591760 conmon[271366]: conmon cb0e4238f3d691b04c28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb0e4238f3d691b04c28717efaa4200196747cc722620380d27c935200172aa7.scope/container/memory.events
Jan 22 05:02:46 np0005591760 podman[271353]: 2026-01-22 10:02:46.361571 +0000 UTC m=+0.326124627 container died cb0e4238f3d691b04c28717efaa4200196747cc722620380d27c935200172aa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:02:46 np0005591760 systemd[1]: var-lib-containers-storage-overlay-66fd3eaf1533f6240ebff3583788412b4cb43a3323c1f74a4c3c8978d14371e6-merged.mount: Deactivated successfully.
Jan 22 05:02:46 np0005591760 podman[271353]: 2026-01-22 10:02:46.383722132 +0000 UTC m=+0.348275758 container remove cb0e4238f3d691b04c28717efaa4200196747cc722620380d27c935200172aa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_leavitt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 05:02:46 np0005591760 systemd[1]: libpod-conmon-cb0e4238f3d691b04c28717efaa4200196747cc722620380d27c935200172aa7.scope: Deactivated successfully.
Jan 22 05:02:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:46.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:46 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v923: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:02:46 np0005591760 podman[271468]: 2026-01-22 10:02:46.795067039 +0000 UTC m=+0.027792604 container create 9bdd646f59d5931994acd43ef996321dad2b42d4ebe0d574231792922b045966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 05:02:46 np0005591760 systemd[1]: Started libpod-conmon-9bdd646f59d5931994acd43ef996321dad2b42d4ebe0d574231792922b045966.scope.
Jan 22 05:02:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:46.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:46 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:02:46 np0005591760 podman[271468]: 2026-01-22 10:02:46.851117932 +0000 UTC m=+0.083843496 container init 9bdd646f59d5931994acd43ef996321dad2b42d4ebe0d574231792922b045966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:02:46 np0005591760 podman[271468]: 2026-01-22 10:02:46.855240047 +0000 UTC m=+0.087965611 container start 9bdd646f59d5931994acd43ef996321dad2b42d4ebe0d574231792922b045966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kirch, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:02:46 np0005591760 podman[271468]: 2026-01-22 10:02:46.856404845 +0000 UTC m=+0.089130408 container attach 9bdd646f59d5931994acd43ef996321dad2b42d4ebe0d574231792922b045966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kirch, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:02:46 np0005591760 nostalgic_kirch[271481]: 167 167
Jan 22 05:02:46 np0005591760 systemd[1]: libpod-9bdd646f59d5931994acd43ef996321dad2b42d4ebe0d574231792922b045966.scope: Deactivated successfully.
Jan 22 05:02:46 np0005591760 podman[271468]: 2026-01-22 10:02:46.8585869 +0000 UTC m=+0.091312464 container died 9bdd646f59d5931994acd43ef996321dad2b42d4ebe0d574231792922b045966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kirch, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:02:46 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c56d7d9ebaeb6507c57afc03bf049df9c5bfb576ba774a5422682a4cb72a1df0-merged.mount: Deactivated successfully.
Jan 22 05:02:46 np0005591760 podman[271468]: 2026-01-22 10:02:46.878874077 +0000 UTC m=+0.111599641 container remove 9bdd646f59d5931994acd43ef996321dad2b42d4ebe0d574231792922b045966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kirch, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 22 05:02:46 np0005591760 podman[271468]: 2026-01-22 10:02:46.784402354 +0000 UTC m=+0.017127917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:02:46 np0005591760 systemd[1]: libpod-conmon-9bdd646f59d5931994acd43ef996321dad2b42d4ebe0d574231792922b045966.scope: Deactivated successfully.
Jan 22 05:02:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:46 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:47 np0005591760 podman[271504]: 2026-01-22 10:02:47.003453387 +0000 UTC m=+0.028619743 container create 6e741904149acdcb5a95672d4d445c7720739189a10b2427921959191d8d9f78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 05:02:47 np0005591760 systemd[1]: Started libpod-conmon-6e741904149acdcb5a95672d4d445c7720739189a10b2427921959191d8d9f78.scope.
Jan 22 05:02:47 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:02:47 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95173bf06e877d809c4bba64d308c1ab2f5cbb9f2f9f6bb3b2127583d48fa60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:47 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95173bf06e877d809c4bba64d308c1ab2f5cbb9f2f9f6bb3b2127583d48fa60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:47 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95173bf06e877d809c4bba64d308c1ab2f5cbb9f2f9f6bb3b2127583d48fa60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:47 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95173bf06e877d809c4bba64d308c1ab2f5cbb9f2f9f6bb3b2127583d48fa60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:02:47 np0005591760 podman[271504]: 2026-01-22 10:02:47.056637263 +0000 UTC m=+0.081803629 container init 6e741904149acdcb5a95672d4d445c7720739189a10b2427921959191d8d9f78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_tu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 05:02:47 np0005591760 podman[271504]: 2026-01-22 10:02:47.062573709 +0000 UTC m=+0.087740065 container start 6e741904149acdcb5a95672d4d445c7720739189a10b2427921959191d8d9f78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_tu, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:02:47 np0005591760 podman[271504]: 2026-01-22 10:02:47.065796098 +0000 UTC m=+0.090962474 container attach 6e741904149acdcb5a95672d4d445c7720739189a10b2427921959191d8d9f78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:02:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:47.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:47 np0005591760 podman[271504]: 2026-01-22 10:02:46.991988933 +0000 UTC m=+0.017155310 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:02:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:47.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:47.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:47.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:02:47.321 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:02:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:02:47.321 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:02:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:02:47.323 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:02:47 np0005591760 interesting_tu[271517]: {}
Jan 22 05:02:47 np0005591760 lvm[271602]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:02:47 np0005591760 lvm[271602]: VG ceph_vg0 finished
Jan 22 05:02:47 np0005591760 systemd[1]: libpod-6e741904149acdcb5a95672d4d445c7720739189a10b2427921959191d8d9f78.scope: Deactivated successfully.
Jan 22 05:02:47 np0005591760 podman[271504]: 2026-01-22 10:02:47.59428898 +0000 UTC m=+0.619455346 container died 6e741904149acdcb5a95672d4d445c7720739189a10b2427921959191d8d9f78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:02:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:02:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:02:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:02:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:02:47 np0005591760 podman[271591]: 2026-01-22 10:02:47.60148352 +0000 UTC m=+0.059072063 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 05:02:47 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c95173bf06e877d809c4bba64d308c1ab2f5cbb9f2f9f6bb3b2127583d48fa60-merged.mount: Deactivated successfully.
Jan 22 05:02:47 np0005591760 podman[271504]: 2026-01-22 10:02:47.620589718 +0000 UTC m=+0.645756074 container remove 6e741904149acdcb5a95672d4d445c7720739189a10b2427921959191d8d9f78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:02:47 np0005591760 systemd[1]: libpod-conmon-6e741904149acdcb5a95672d4d445c7720739189a10b2427921959191d8d9f78.scope: Deactivated successfully.
Jan 22 05:02:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:02:47 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:02:47 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:02:47 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:02:47 np0005591760 nova_compute[248045]: 2026-01-22 10:02:47.948 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:48.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:48 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v924: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:02:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:02:48 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:02:48 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:02:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 22 05:02:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:48.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:48.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:48.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:48.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:48.929Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:02:49
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['.nfs', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'backups', 'volumes', 'default.rgw.meta', 'images', 'cephfs.cephfs.data']
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:02:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:02:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:50.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:50 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v925: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:02:50 np0005591760 nova_compute[248045]: 2026-01-22 10:02:50.486 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:50.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:51 np0005591760 podman[271651]: 2026-01-22 10:02:51.068392866 +0000 UTC m=+0.061119243 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 05:02:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:52 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:02:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:52.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:02:52 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v926: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:02:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:52.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:52 np0005591760 nova_compute[248045]: 2026-01-22 10:02:52.949 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:02:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:54.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:54 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v927: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:02:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:54.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:55 np0005591760 nova_compute[248045]: 2026-01-22 10:02:55.487 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:56.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:56 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v928: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:02:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:56.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:02:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:02:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:02:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:02:57 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:02:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:57.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:57.090Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:57.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:57.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:02:57] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:02:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:02:57] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:02:57 np0005591760 nova_compute[248045]: 2026-01-22 10:02:57.952 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:02:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:02:58.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:58 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v929: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:02:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:02:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:02:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:02:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:02:58.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:02:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:58.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:58.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:58.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:02:58.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:02:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:03:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:00.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:00 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v930: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:03:00 np0005591760 nova_compute[248045]: 2026-01-22 10:03:00.489 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:00.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:03:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:03:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:03:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:02 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:03:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:02.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:02 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v931: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:02.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:02 np0005591760 nova_compute[248045]: 2026-01-22 10:03:02.955 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:03:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:04.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:04 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:04.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:05 np0005591760 nova_compute[248045]: 2026-01-22 10:03:05.492 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:03:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:06.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:03:06 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:03:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:06.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:03:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:03:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:03:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:07 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:03:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:07.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:07.090Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:07.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:07.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:03:07] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:03:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:03:07] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:03:07 np0005591760 nova_compute[248045]: 2026-01-22 10:03:07.957 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:08 np0005591760 nova_compute[248045]: 2026-01-22 10:03:08.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:03:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:08.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:08 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v934: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:03:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:08.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:08.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:08.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:08.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:08.928Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:03:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:10.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:03:10 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v935: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:03:10 np0005591760 nova_compute[248045]: 2026-01-22 10:03:10.494 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:10.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:03:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:03:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:03:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:12 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:03:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:12.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:12 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v936: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:12.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:12 np0005591760 nova_compute[248045]: 2026-01-22 10:03:12.960 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:03:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:13.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:13.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:13.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:14 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v937: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:14.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:14.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:15 np0005591760 nova_compute[248045]: 2026-01-22 10:03:15.495 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:16 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:03:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:16.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:16.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:16 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:03:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:16 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:03:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:16 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:03:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:17 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:03:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:17.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:17.087Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:17.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:17.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:03:17] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:03:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:03:17] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:03:17 np0005591760 nova_compute[248045]: 2026-01-22 10:03:17.962 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:18 np0005591760 podman[271752]: 2026-01-22 10:03:18.047099212 +0000 UTC m=+0.040391256 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 05:03:18 np0005591760 nova_compute[248045]: 2026-01-22 10:03:18.315 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:03:18 np0005591760 nova_compute[248045]: 2026-01-22 10:03:18.315 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:03:18 np0005591760 nova_compute[248045]: 2026-01-22 10:03:18.315 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:03:18 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v939: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:03:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:18.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:03:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:03:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:18.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:18.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:18.925Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:18.926Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:18.926Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.295 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.325 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.325 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.325 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.325 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.326 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:03:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:03:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:03:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:03:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:03:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:03:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:03:19 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:03:19 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1129588978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.667 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.875 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.876 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4528MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, 
"label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.876 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:03:19 np0005591760 nova_compute[248045]: 2026-01-22 10:03:19.876 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.135 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.135 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.294 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing inventories for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.318 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating ProviderTree inventory for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.318 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating inventory in ProviderTree for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.340 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing aggregate associations for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.365 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing trait associations for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f, traits: HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,HW_CPU_X86_AVX512VAES,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,HW_CPU_X86_SSE41,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.384 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:03:20 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v940: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:03:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:20.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.496 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:03:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305070585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.726 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.730 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.743 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.744 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.744 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.745 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:03:20 np0005591760 nova_compute[248045]: 2026-01-22 10:03:20.745 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 05:03:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:20.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:21 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:03:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:21 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:03:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:21 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:03:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:22 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:03:22 np0005591760 podman[271816]: 2026-01-22 10:03:22.065363933 +0000 UTC m=+0.056974655 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 05:03:22 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v941: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:22.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:22.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:22 np0005591760 nova_compute[248045]: 2026-01-22 10:03:22.965 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:03:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:23.562Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:23.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:23.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:23.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:24 np0005591760 nova_compute[248045]: 2026-01-22 10:03:24.315 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:03:24 np0005591760 nova_compute[248045]: 2026-01-22 10:03:24.315 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:03:24 np0005591760 nova_compute[248045]: 2026-01-22 10:03:24.315 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:03:24 np0005591760 nova_compute[248045]: 2026-01-22 10:03:24.329 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:03:24 np0005591760 nova_compute[248045]: 2026-01-22 10:03:24.329 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:03:24 np0005591760 nova_compute[248045]: 2026-01-22 10:03:24.329 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:03:24 np0005591760 nova_compute[248045]: 2026-01-22 10:03:24.329 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:03:24 np0005591760 nova_compute[248045]: 2026-01-22 10:03:24.330 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:03:24 np0005591760 nova_compute[248045]: 2026-01-22 10:03:24.330 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 05:03:24 np0005591760 nova_compute[248045]: 2026-01-22 10:03:24.342 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 05:03:24 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v942: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:03:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:24.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:03:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:24.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:25 np0005591760 nova_compute[248045]: 2026-01-22 10:03:25.498 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:26 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v943: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:03:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:26.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:26.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:26 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:03:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:26 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:03:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:26 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:03:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:03:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:27.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:27.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:27.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:27.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:03:27] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:03:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:03:27] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:03:27 np0005591760 nova_compute[248045]: 2026-01-22 10:03:27.967 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:28 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v944: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:28.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:03:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:28.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:28.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:28.929Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:28.931Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:28.931Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 05:03:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 12K writes, 45K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 4083 syncs, 3.10 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2892 writes, 9813 keys, 2892 commit groups, 1.0 writes per commit group, ingest: 11.69 MB, 0.02 MB/s#012Interval WAL: 2892 writes, 1299 syncs, 2.23 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 05:03:30 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v945: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:03:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:30.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:30 np0005591760 nova_compute[248045]: 2026-01-22 10:03:30.500 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:03:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:03:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:03:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:03:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:03:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:32 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:03:32 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v946: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:32.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:32.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:32 np0005591760 nova_compute[248045]: 2026-01-22 10:03:32.968 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:03:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:33.563Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:33.632Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:33.634Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:33.634Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:34 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v947: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:34.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:34.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:35 np0005591760 nova_compute[248045]: 2026-01-22 10:03:35.501 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:36 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:03:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:36.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:36.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:36 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:03:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:36 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:03:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:36 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:03:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:37 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:03:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:37.081Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:37.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:37.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:37.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:03:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:03:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:03:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:03:37 np0005591760 nova_compute[248045]: 2026-01-22 10:03:37.970 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:38 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:38.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:03:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:38.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:38.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:38.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:38.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:38.933Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:40 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:03:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:40.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:40 np0005591760 nova_compute[248045]: 2026-01-22 10:03:40.504 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:03:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:40.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:03:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:41 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:03:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:41 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:03:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:41 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:03:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:03:42 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:42.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:42.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:42 np0005591760 nova_compute[248045]: 2026-01-22 10:03:42.971 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:03:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:43.563Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:43.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:43.589Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:43.595Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:44 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:44.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:03:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:44.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:03:45 np0005591760 nova_compute[248045]: 2026-01-22 10:03:45.505 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:46 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v953: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:03:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:03:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:46.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:03:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:46.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:46 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:03:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:46 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:03:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:46 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:03:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:03:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:47.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:47.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:47.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:47.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:03:47.322 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:03:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:03:47.322 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:03:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:03:47.323 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:03:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:03:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:03:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:03:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:03:47 np0005591760 nova_compute[248045]: 2026-01-22 10:03:47.973 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:48 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v954: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:03:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:03:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:48.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:03:48 np0005591760 podman[271993]: 2026-01-22 10:03:48.52914724 +0000 UTC m=+0.043200897 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 05:03:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:03:48 np0005591760 podman[272066]: 2026-01-22 10:03:48.856853519 +0000 UTC m=+0.041371756 container create ebcee89c816cef8f839862a29547f9413d6cfa89e13b134381aa8b6fab124de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 05:03:48 np0005591760 systemd[1]: Started libpod-conmon-ebcee89c816cef8f839862a29547f9413d6cfa89e13b134381aa8b6fab124de6.scope.
Jan 22 05:03:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:48.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:48 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:03:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:48.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:48 np0005591760 podman[272066]: 2026-01-22 10:03:48.922364777 +0000 UTC m=+0.106883004 container init ebcee89c816cef8f839862a29547f9413d6cfa89e13b134381aa8b6fab124de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_einstein, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 22 05:03:48 np0005591760 podman[272066]: 2026-01-22 10:03:48.928275195 +0000 UTC m=+0.112793412 container start ebcee89c816cef8f839862a29547f9413d6cfa89e13b134381aa8b6fab124de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_einstein, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 22 05:03:48 np0005591760 podman[272066]: 2026-01-22 10:03:48.929575678 +0000 UTC m=+0.114093905 container attach ebcee89c816cef8f839862a29547f9413d6cfa89e13b134381aa8b6fab124de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_einstein, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:03:48 np0005591760 stoic_einstein[272079]: 167 167
Jan 22 05:03:48 np0005591760 systemd[1]: libpod-ebcee89c816cef8f839862a29547f9413d6cfa89e13b134381aa8b6fab124de6.scope: Deactivated successfully.
Jan 22 05:03:48 np0005591760 podman[272066]: 2026-01-22 10:03:48.93243492 +0000 UTC m=+0.116953138 container died ebcee89c816cef8f839862a29547f9413d6cfa89e13b134381aa8b6fab124de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_einstein, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 05:03:48 np0005591760 podman[272066]: 2026-01-22 10:03:48.840903704 +0000 UTC m=+0.025421951 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:03:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:48.941Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:48.942Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:48.942Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:48 np0005591760 systemd[1]: var-lib-containers-storage-overlay-fe7762cb13856c834b1398daeab55083dea71d6c7301213aedabe80e90eb95a5-merged.mount: Deactivated successfully.
Jan 22 05:03:48 np0005591760 podman[272066]: 2026-01-22 10:03:48.955758634 +0000 UTC m=+0.140276861 container remove ebcee89c816cef8f839862a29547f9413d6cfa89e13b134381aa8b6fab124de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_einstein, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:03:48 np0005591760 systemd[1]: libpod-conmon-ebcee89c816cef8f839862a29547f9413d6cfa89e13b134381aa8b6fab124de6.scope: Deactivated successfully.
Jan 22 05:03:49 np0005591760 podman[272101]: 2026-01-22 10:03:49.080050312 +0000 UTC m=+0.029676587 container create 2091dd714f19d9d9404d00ad98bbd08b392a53b49bfafe22ec2bfef61c21c126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_nobel, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 22 05:03:49 np0005591760 systemd[1]: Started libpod-conmon-2091dd714f19d9d9404d00ad98bbd08b392a53b49bfafe22ec2bfef61c21c126.scope.
Jan 22 05:03:49 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:03:49 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866048bf19c89f1e92519106c8ab2c1c2c4b196ed47c8bf5820ba790ef5806c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:49 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866048bf19c89f1e92519106c8ab2c1c2c4b196ed47c8bf5820ba790ef5806c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:49 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866048bf19c89f1e92519106c8ab2c1c2c4b196ed47c8bf5820ba790ef5806c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:49 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866048bf19c89f1e92519106c8ab2c1c2c4b196ed47c8bf5820ba790ef5806c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:49 np0005591760 podman[272101]: 2026-01-22 10:03:49.136749488 +0000 UTC m=+0.086375772 container init 2091dd714f19d9d9404d00ad98bbd08b392a53b49bfafe22ec2bfef61c21c126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_nobel, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 05:03:49 np0005591760 podman[272101]: 2026-01-22 10:03:49.14241212 +0000 UTC m=+0.092038394 container start 2091dd714f19d9d9404d00ad98bbd08b392a53b49bfafe22ec2bfef61c21c126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_nobel, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:03:49 np0005591760 podman[272101]: 2026-01-22 10:03:49.143924271 +0000 UTC m=+0.093550566 container attach 2091dd714f19d9d9404d00ad98bbd08b392a53b49bfafe22ec2bfef61c21c126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_nobel, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:03:49 np0005591760 podman[272101]: 2026-01-22 10:03:49.068838856 +0000 UTC m=+0.018465150 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:03:49
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['volumes', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.nfs']
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]: [
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:    {
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:        "available": false,
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:        "being_replaced": false,
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:        "ceph_device_lvm": false,
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:        "lsm_data": {},
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:        "lvs": [],
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:        "path": "/dev/sr0",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:        "rejected_reasons": [
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "Has a FileSystem",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "Insufficient space (<5GB)"
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:        ],
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:        "sys_api": {
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "actuators": null,
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "device_nodes": [
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:                "sr0"
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            ],
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "devname": "sr0",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "human_readable_size": "474.00 KB",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "id_bus": "ata",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "model": "QEMU DVD-ROM",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "nr_requests": "64",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "parent": "/dev/sr0",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "partitions": {},
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "path": "/dev/sr0",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "removable": "1",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "rev": "2.5+",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "ro": "0",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "rotational": "1",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "sas_address": "",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "sas_device_handle": "",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "scheduler_mode": "mq-deadline",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "sectors": 0,
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "sectorsize": "2048",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "size": 485376.0,
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "support_discard": "2048",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "type": "disk",
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:            "vendor": "QEMU"
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:        }
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]:    }
Jan 22 05:03:49 np0005591760 hardcore_nobel[272114]: ]
Jan 22 05:03:49 np0005591760 systemd[1]: libpod-2091dd714f19d9d9404d00ad98bbd08b392a53b49bfafe22ec2bfef61c21c126.scope: Deactivated successfully.
Jan 22 05:03:49 np0005591760 podman[273300]: 2026-01-22 10:03:49.718676949 +0000 UTC m=+0.019583740 container died 2091dd714f19d9d9404d00ad98bbd08b392a53b49bfafe22ec2bfef61c21c126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 05:03:49 np0005591760 systemd[1]: var-lib-containers-storage-overlay-866048bf19c89f1e92519106c8ab2c1c2c4b196ed47c8bf5820ba790ef5806c5-merged.mount: Deactivated successfully.
Jan 22 05:03:49 np0005591760 podman[273300]: 2026-01-22 10:03:49.741432982 +0000 UTC m=+0.042339772 container remove 2091dd714f19d9d9404d00ad98bbd08b392a53b49bfafe22ec2bfef61c21c126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_nobel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 05:03:49 np0005591760 systemd[1]: libpod-conmon-2091dd714f19d9d9404d00ad98bbd08b392a53b49bfafe22ec2bfef61c21c126.scope: Deactivated successfully.
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:49 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:03:50 np0005591760 podman[273393]: 2026-01-22 10:03:50.209302936 +0000 UTC m=+0.028752564 container create bda962527217e59aac6441fec8edaa9a69b84ba504717045fd545972a69927ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 05:03:50 np0005591760 systemd[1]: Started libpod-conmon-bda962527217e59aac6441fec8edaa9a69b84ba504717045fd545972a69927ed.scope.
Jan 22 05:03:50 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:03:50 np0005591760 podman[273393]: 2026-01-22 10:03:50.262362897 +0000 UTC m=+0.081812545 container init bda962527217e59aac6441fec8edaa9a69b84ba504717045fd545972a69927ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_austin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:03:50 np0005591760 podman[273393]: 2026-01-22 10:03:50.267919098 +0000 UTC m=+0.087368726 container start bda962527217e59aac6441fec8edaa9a69b84ba504717045fd545972a69927ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:03:50 np0005591760 podman[273393]: 2026-01-22 10:03:50.269046935 +0000 UTC m=+0.088496573 container attach bda962527217e59aac6441fec8edaa9a69b84ba504717045fd545972a69927ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 05:03:50 np0005591760 practical_austin[273406]: 167 167
Jan 22 05:03:50 np0005591760 systemd[1]: libpod-bda962527217e59aac6441fec8edaa9a69b84ba504717045fd545972a69927ed.scope: Deactivated successfully.
Jan 22 05:03:50 np0005591760 podman[273393]: 2026-01-22 10:03:50.271581155 +0000 UTC m=+0.091030793 container died bda962527217e59aac6441fec8edaa9a69b84ba504717045fd545972a69927ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 05:03:50 np0005591760 systemd[1]: var-lib-containers-storage-overlay-851b120412711b6c05754868a1b52b33cc38f2c0667552ff5190bd437c098579-merged.mount: Deactivated successfully.
Jan 22 05:03:50 np0005591760 podman[273393]: 2026-01-22 10:03:50.290508035 +0000 UTC m=+0.109957663 container remove bda962527217e59aac6441fec8edaa9a69b84ba504717045fd545972a69927ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_austin, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:03:50 np0005591760 podman[273393]: 2026-01-22 10:03:50.198176068 +0000 UTC m=+0.017625706 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:03:50 np0005591760 systemd[1]: libpod-conmon-bda962527217e59aac6441fec8edaa9a69b84ba504717045fd545972a69927ed.scope: Deactivated successfully.
Jan 22 05:03:50 np0005591760 podman[273427]: 2026-01-22 10:03:50.411957843 +0000 UTC m=+0.028037545 container create 13ba00a99848dacf2bd0cb5ede767122349e2790ec8ed387cefa9dc67c6ad0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 05:03:50 np0005591760 systemd[1]: Started libpod-conmon-13ba00a99848dacf2bd0cb5ede767122349e2790ec8ed387cefa9dc67c6ad0f9.scope.
Jan 22 05:03:50 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:03:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb0217e7d979fe025185360e85f1b04758c01f3fa4cae5186713713ba7c4acbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb0217e7d979fe025185360e85f1b04758c01f3fa4cae5186713713ba7c4acbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb0217e7d979fe025185360e85f1b04758c01f3fa4cae5186713713ba7c4acbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb0217e7d979fe025185360e85f1b04758c01f3fa4cae5186713713ba7c4acbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:50 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb0217e7d979fe025185360e85f1b04758c01f3fa4cae5186713713ba7c4acbf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:50 np0005591760 podman[273427]: 2026-01-22 10:03:50.477009203 +0000 UTC m=+0.093088905 container init 13ba00a99848dacf2bd0cb5ede767122349e2790ec8ed387cefa9dc67c6ad0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Jan 22 05:03:50 np0005591760 podman[273427]: 2026-01-22 10:03:50.481824134 +0000 UTC m=+0.097903837 container start 13ba00a99848dacf2bd0cb5ede767122349e2790ec8ed387cefa9dc67c6ad0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 05:03:50 np0005591760 podman[273427]: 2026-01-22 10:03:50.482879795 +0000 UTC m=+0.098959498 container attach 13ba00a99848dacf2bd0cb5ede767122349e2790ec8ed387cefa9dc67c6ad0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:03:50 np0005591760 podman[273427]: 2026-01-22 10:03:50.401173973 +0000 UTC m=+0.017253695 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:03:50 np0005591760 nova_compute[248045]: 2026-01-22 10:03:50.507 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:03:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:50.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:03:50 np0005591760 gifted_curie[273440]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:03:50 np0005591760 gifted_curie[273440]: --> All data devices are unavailable
Jan 22 05:03:50 np0005591760 systemd[1]: libpod-13ba00a99848dacf2bd0cb5ede767122349e2790ec8ed387cefa9dc67c6ad0f9.scope: Deactivated successfully.
Jan 22 05:03:50 np0005591760 podman[273427]: 2026-01-22 10:03:50.74437655 +0000 UTC m=+0.360456252 container died 13ba00a99848dacf2bd0cb5ede767122349e2790ec8ed387cefa9dc67c6ad0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 22 05:03:50 np0005591760 systemd[1]: var-lib-containers-storage-overlay-fb0217e7d979fe025185360e85f1b04758c01f3fa4cae5186713713ba7c4acbf-merged.mount: Deactivated successfully.
Jan 22 05:03:50 np0005591760 podman[273427]: 2026-01-22 10:03:50.766376026 +0000 UTC m=+0.382455728 container remove 13ba00a99848dacf2bd0cb5ede767122349e2790ec8ed387cefa9dc67c6ad0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:03:50 np0005591760 systemd[1]: libpod-conmon-13ba00a99848dacf2bd0cb5ede767122349e2790ec8ed387cefa9dc67c6ad0f9.scope: Deactivated successfully.
Jan 22 05:03:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:50.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:51 np0005591760 podman[273548]: 2026-01-22 10:03:51.196764223 +0000 UTC m=+0.031403243 container create 459e8428e877bf9ae552bcddf9bff8c9aac32f2b0eb0d6ed11699ce58479d987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:03:51 np0005591760 systemd[1]: Started libpod-conmon-459e8428e877bf9ae552bcddf9bff8c9aac32f2b0eb0d6ed11699ce58479d987.scope.
Jan 22 05:03:51 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:03:51 np0005591760 podman[273548]: 2026-01-22 10:03:51.249326786 +0000 UTC m=+0.083965807 container init 459e8428e877bf9ae552bcddf9bff8c9aac32f2b0eb0d6ed11699ce58479d987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 05:03:51 np0005591760 podman[273548]: 2026-01-22 10:03:51.253835971 +0000 UTC m=+0.088474992 container start 459e8428e877bf9ae552bcddf9bff8c9aac32f2b0eb0d6ed11699ce58479d987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 05:03:51 np0005591760 podman[273548]: 2026-01-22 10:03:51.254957817 +0000 UTC m=+0.089596858 container attach 459e8428e877bf9ae552bcddf9bff8c9aac32f2b0eb0d6ed11699ce58479d987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Jan 22 05:03:51 np0005591760 gifted_kirch[273561]: 167 167
Jan 22 05:03:51 np0005591760 systemd[1]: libpod-459e8428e877bf9ae552bcddf9bff8c9aac32f2b0eb0d6ed11699ce58479d987.scope: Deactivated successfully.
Jan 22 05:03:51 np0005591760 podman[273548]: 2026-01-22 10:03:51.257205527 +0000 UTC m=+0.091844538 container died 459e8428e877bf9ae552bcddf9bff8c9aac32f2b0eb0d6ed11699ce58479d987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_kirch, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:03:51 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4890acad29bc703ee4e86817c634ab2e1992daa6cbafe256d3319cbbedab8ce2-merged.mount: Deactivated successfully.
Jan 22 05:03:51 np0005591760 podman[273548]: 2026-01-22 10:03:51.275554607 +0000 UTC m=+0.110193629 container remove 459e8428e877bf9ae552bcddf9bff8c9aac32f2b0eb0d6ed11699ce58479d987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_kirch, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 05:03:51 np0005591760 podman[273548]: 2026-01-22 10:03:51.185154365 +0000 UTC m=+0.019793406 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:03:51 np0005591760 systemd[1]: libpod-conmon-459e8428e877bf9ae552bcddf9bff8c9aac32f2b0eb0d6ed11699ce58479d987.scope: Deactivated successfully.
Jan 22 05:03:51 np0005591760 podman[273582]: 2026-01-22 10:03:51.398217493 +0000 UTC m=+0.029586648 container create 53c620b3a0dcbbc99d7fd841afe4a57dcfa0b21f6b88c29b5bfd3db5cd34f05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_payne, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 05:03:51 np0005591760 systemd[1]: Started libpod-conmon-53c620b3a0dcbbc99d7fd841afe4a57dcfa0b21f6b88c29b5bfd3db5cd34f05a.scope.
Jan 22 05:03:51 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:03:51 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388d646782e7b5837375f53852cce86b868e76808a6d906aecb120e85e9ed52a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:51 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388d646782e7b5837375f53852cce86b868e76808a6d906aecb120e85e9ed52a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:51 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388d646782e7b5837375f53852cce86b868e76808a6d906aecb120e85e9ed52a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:51 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388d646782e7b5837375f53852cce86b868e76808a6d906aecb120e85e9ed52a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:51 np0005591760 podman[273582]: 2026-01-22 10:03:51.450227906 +0000 UTC m=+0.081597070 container init 53c620b3a0dcbbc99d7fd841afe4a57dcfa0b21f6b88c29b5bfd3db5cd34f05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_payne, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:03:51 np0005591760 podman[273582]: 2026-01-22 10:03:51.457844422 +0000 UTC m=+0.089213576 container start 53c620b3a0dcbbc99d7fd841afe4a57dcfa0b21f6b88c29b5bfd3db5cd34f05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_payne, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:03:51 np0005591760 podman[273582]: 2026-01-22 10:03:51.458933015 +0000 UTC m=+0.090302169 container attach 53c620b3a0dcbbc99d7fd841afe4a57dcfa0b21f6b88c29b5bfd3db5cd34f05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_payne, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:03:51 np0005591760 podman[273582]: 2026-01-22 10:03:51.386882323 +0000 UTC m=+0.018251497 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]: {
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:    "0": [
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:        {
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            "devices": [
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "/dev/loop3"
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            ],
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            "lv_name": "ceph_lv0",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            "lv_size": "21470642176",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            "name": "ceph_lv0",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            "tags": {
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.cluster_name": "ceph",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.crush_device_class": "",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.encrypted": "0",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.osd_id": "0",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.type": "block",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.vdo": "0",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:                "ceph.with_tpm": "0"
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            },
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            "type": "block",
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:            "vg_name": "ceph_vg0"
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:        }
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]:    ]
Jan 22 05:03:51 np0005591760 compassionate_payne[273595]: }
Jan 22 05:03:51 np0005591760 systemd[1]: libpod-53c620b3a0dcbbc99d7fd841afe4a57dcfa0b21f6b88c29b5bfd3db5cd34f05a.scope: Deactivated successfully.
Jan 22 05:03:51 np0005591760 podman[273582]: 2026-01-22 10:03:51.69938219 +0000 UTC m=+0.330751344 container died 53c620b3a0dcbbc99d7fd841afe4a57dcfa0b21f6b88c29b5bfd3db5cd34f05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_payne, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:03:51 np0005591760 systemd[1]: var-lib-containers-storage-overlay-388d646782e7b5837375f53852cce86b868e76808a6d906aecb120e85e9ed52a-merged.mount: Deactivated successfully.
Jan 22 05:03:51 np0005591760 podman[273582]: 2026-01-22 10:03:51.725618166 +0000 UTC m=+0.356987320 container remove 53c620b3a0dcbbc99d7fd841afe4a57dcfa0b21f6b88c29b5bfd3db5cd34f05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:03:51 np0005591760 systemd[1]: libpod-conmon-53c620b3a0dcbbc99d7fd841afe4a57dcfa0b21f6b88c29b5bfd3db5cd34f05a.scope: Deactivated successfully.
Jan 22 05:03:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:03:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:03:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:52 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:03:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:52 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:03:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:52 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:03:52 np0005591760 podman[273697]: 2026-01-22 10:03:52.152910263 +0000 UTC m=+0.028264854 container create 5c0d4ce162284c7dd7583dc712bd565245685c7d6bfdc0a6ac92a6d7c555f3ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_galois, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 05:03:52 np0005591760 systemd[1]: Started libpod-conmon-5c0d4ce162284c7dd7583dc712bd565245685c7d6bfdc0a6ac92a6d7c555f3ad.scope.
Jan 22 05:03:52 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:03:52 np0005591760 podman[273697]: 2026-01-22 10:03:52.196217619 +0000 UTC m=+0.071572231 container init 5c0d4ce162284c7dd7583dc712bd565245685c7d6bfdc0a6ac92a6d7c555f3ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_galois, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:03:52 np0005591760 podman[273697]: 2026-01-22 10:03:52.200763573 +0000 UTC m=+0.076118165 container start 5c0d4ce162284c7dd7583dc712bd565245685c7d6bfdc0a6ac92a6d7c555f3ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_galois, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 05:03:52 np0005591760 podman[273697]: 2026-01-22 10:03:52.202994592 +0000 UTC m=+0.078349202 container attach 5c0d4ce162284c7dd7583dc712bd565245685c7d6bfdc0a6ac92a6d7c555f3ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_galois, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 05:03:52 np0005591760 nostalgic_galois[273711]: 167 167
Jan 22 05:03:52 np0005591760 systemd[1]: libpod-5c0d4ce162284c7dd7583dc712bd565245685c7d6bfdc0a6ac92a6d7c555f3ad.scope: Deactivated successfully.
Jan 22 05:03:52 np0005591760 podman[273697]: 2026-01-22 10:03:52.204515391 +0000 UTC m=+0.079869972 container died 5c0d4ce162284c7dd7583dc712bd565245685c7d6bfdc0a6ac92a6d7c555f3ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:03:52 np0005591760 systemd[1]: var-lib-containers-storage-overlay-867c1a538fb5748d506201059764b108c2f0b32cada8178f75457222c957ae58-merged.mount: Deactivated successfully.
Jan 22 05:03:52 np0005591760 podman[273697]: 2026-01-22 10:03:52.227560818 +0000 UTC m=+0.102915409 container remove 5c0d4ce162284c7dd7583dc712bd565245685c7d6bfdc0a6ac92a6d7c555f3ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:03:52 np0005591760 podman[273697]: 2026-01-22 10:03:52.141121598 +0000 UTC m=+0.016476208 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:03:52 np0005591760 systemd[1]: libpod-conmon-5c0d4ce162284c7dd7583dc712bd565245685c7d6bfdc0a6ac92a6d7c555f3ad.scope: Deactivated successfully.
Jan 22 05:03:52 np0005591760 podman[273708]: 2026-01-22 10:03:52.253263269 +0000 UTC m=+0.075933476 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 05:03:52 np0005591760 podman[273753]: 2026-01-22 10:03:52.351678649 +0000 UTC m=+0.030190195 container create 7b0b60cf635ab753d2bc6b92c625782af95999656d885366c5d74b42af933e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_saha, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:03:52 np0005591760 systemd[1]: Started libpod-conmon-7b0b60cf635ab753d2bc6b92c625782af95999656d885366c5d74b42af933e91.scope.
Jan 22 05:03:52 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:03:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc83ce064552db810ee219795ff03eaaa58e5bdf9cff2d17c5fe7628c41c9508/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc83ce064552db810ee219795ff03eaaa58e5bdf9cff2d17c5fe7628c41c9508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc83ce064552db810ee219795ff03eaaa58e5bdf9cff2d17c5fe7628c41c9508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:52 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc83ce064552db810ee219795ff03eaaa58e5bdf9cff2d17c5fe7628c41c9508/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:03:52 np0005591760 podman[273753]: 2026-01-22 10:03:52.408999178 +0000 UTC m=+0.087510723 container init 7b0b60cf635ab753d2bc6b92c625782af95999656d885366c5d74b42af933e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_saha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:03:52 np0005591760 podman[273753]: 2026-01-22 10:03:52.413791797 +0000 UTC m=+0.092303343 container start 7b0b60cf635ab753d2bc6b92c625782af95999656d885366c5d74b42af933e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:03:52 np0005591760 podman[273753]: 2026-01-22 10:03:52.415041965 +0000 UTC m=+0.093553511 container attach 7b0b60cf635ab753d2bc6b92c625782af95999656d885366c5d74b42af933e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 05:03:52 np0005591760 podman[273753]: 2026-01-22 10:03:52.340813887 +0000 UTC m=+0.019325462 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:03:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:03:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:52.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:03:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:52.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:52 np0005591760 sleepy_saha[273766]: {}
Jan 22 05:03:52 np0005591760 lvm[273843]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:03:52 np0005591760 lvm[273843]: VG ceph_vg0 finished
Jan 22 05:03:52 np0005591760 systemd[1]: libpod-7b0b60cf635ab753d2bc6b92c625782af95999656d885366c5d74b42af933e91.scope: Deactivated successfully.
Jan 22 05:03:52 np0005591760 podman[273753]: 2026-01-22 10:03:52.934502821 +0000 UTC m=+0.613014377 container died 7b0b60cf635ab753d2bc6b92c625782af95999656d885366c5d74b42af933e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 05:03:52 np0005591760 systemd[1]: var-lib-containers-storage-overlay-dc83ce064552db810ee219795ff03eaaa58e5bdf9cff2d17c5fe7628c41c9508-merged.mount: Deactivated successfully.
Jan 22 05:03:52 np0005591760 podman[273753]: 2026-01-22 10:03:52.960359973 +0000 UTC m=+0.638871529 container remove 7b0b60cf635ab753d2bc6b92c625782af95999656d885366c5d74b42af933e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:03:52 np0005591760 systemd[1]: libpod-conmon-7b0b60cf635ab753d2bc6b92c625782af95999656d885366c5d74b42af933e91.scope: Deactivated successfully.
Jan 22 05:03:52 np0005591760 nova_compute[248045]: 2026-01-22 10:03:52.974 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:52 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:03:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:03:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:03:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:53.564Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:53.579Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:53.579Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:53.581Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:03:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:03:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:54.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:03:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:54.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:03:55 np0005591760 nova_compute[248045]: 2026-01-22 10:03:55.508 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 22 05:03:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:56.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:56.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:03:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:03:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:03:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:03:57 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:03:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:57.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:57.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:57.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:57.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:03:57] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:03:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:03:57] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:03:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:03:57 np0005591760 nova_compute[248045]: 2026-01-22 10:03:57.978 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:03:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:03:58.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:03:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:03:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:03:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:03:58.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:03:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:58.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:58.927Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:58.928Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:03:58.928Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:03:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:04:00 np0005591760 nova_compute[248045]: 2026-01-22 10:04:00.512 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:00.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:00.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:04:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:04:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:04:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:04:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:02 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:04:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:02.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:02.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:02 np0005591760 nova_compute[248045]: 2026-01-22 10:04:02.980 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:04:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:03.565Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:03.605Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:03.605Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:03.606Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:04:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:04.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:04.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:05 np0005591760 nova_compute[248045]: 2026-01-22 10:04:05.516 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:04:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:06.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:06.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:04:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:04:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:04:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:07 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:04:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:07.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:07.089Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:07.089Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:07.089Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:04:07] "GET /metrics HTTP/1.1" 200 48596 "" "Prometheus/2.51.0"
Jan 22 05:04:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:04:07] "GET /metrics HTTP/1.1" 200 48596 "" "Prometheus/2.51.0"
Jan 22 05:04:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:04:07 np0005591760 nova_compute[248045]: 2026-01-22 10:04:07.981 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:08.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:04:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:08.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:08.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:08.934Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:08.934Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:08.934Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v965: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:04:10 np0005591760 nova_compute[248045]: 2026-01-22 10:04:10.517 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:10.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:10.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v966: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:04:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:04:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:04:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:04:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:04:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:12.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:12.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:12 np0005591760 nova_compute[248045]: 2026-01-22 10:04:12.982 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:04:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:13.567Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:13.581Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:13.583Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:13.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v967: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:04:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:14.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:14.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:15 np0005591760 nova_compute[248045]: 2026-01-22 10:04:15.519 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:04:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:15 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:04:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:15 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:04:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:15 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:04:16 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:15 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:04:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:16.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:16.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:17.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:17.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:17.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:17.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:04:17] "GET /metrics HTTP/1.1" 200 48596 "" "Prometheus/2.51.0"
Jan 22 05:04:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:04:17] "GET /metrics HTTP/1.1" 200 48596 "" "Prometheus/2.51.0"
Jan 22 05:04:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:04:17 np0005591760 nova_compute[248045]: 2026-01-22 10:04:17.984 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:18 np0005591760 nova_compute[248045]: 2026-01-22 10:04:18.312 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:04:18 np0005591760 nova_compute[248045]: 2026-01-22 10:04:18.313 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:04:18 np0005591760 nova_compute[248045]: 2026-01-22 10:04:18.313 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:04:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:04:18 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4098620155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:04:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:18.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:04:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:18.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:18.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:18.944Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:18.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:18.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:19 np0005591760 podman[273956]: 2026-01-22 10:04:19.049379286 +0000 UTC m=+0.040490896 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 22 05:04:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:04:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:04:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:04:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:04:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:04:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:04:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:04:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:04:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:04:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:04:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.295 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.312 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.312 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.333 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.333 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.334 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.334 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.334 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.521 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:04:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:20.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.677 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.875 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.876 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4513MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, 
"label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.876 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.877 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:04:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:20.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.964 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.964 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:04:20 np0005591760 nova_compute[248045]: 2026-01-22 10:04:20.996 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:04:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:04:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3038362481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:04:21 np0005591760 nova_compute[248045]: 2026-01-22 10:04:21.337 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:04:21 np0005591760 nova_compute[248045]: 2026-01-22 10:04:21.341 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:04:21 np0005591760 nova_compute[248045]: 2026-01-22 10:04:21.355 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:04:21 np0005591760 nova_compute[248045]: 2026-01-22 10:04:21.357 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:04:21 np0005591760 nova_compute[248045]: 2026-01-22 10:04:21.357 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:04:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 22 05:04:22 np0005591760 nova_compute[248045]: 2026-01-22 10:04:22.356 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:04:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:04:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:22.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:04:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:22.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:22 np0005591760 nova_compute[248045]: 2026-01-22 10:04:22.985 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:23 np0005591760 podman[274020]: 2026-01-22 10:04:23.059125689 +0000 UTC m=+0.051877062 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:04:23 np0005591760 nova_compute[248045]: 2026-01-22 10:04:23.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:04:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:04:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:23.568Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:23.583Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:23.584Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:23.584Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:04:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:24.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:04:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:24.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:04:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:04:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:04:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:04:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:04:25 np0005591760 nova_compute[248045]: 2026-01-22 10:04:25.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:04:25 np0005591760 nova_compute[248045]: 2026-01-22 10:04:25.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:04:25 np0005591760 nova_compute[248045]: 2026-01-22 10:04:25.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:04:25 np0005591760 nova_compute[248045]: 2026-01-22 10:04:25.317 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:04:25 np0005591760 nova_compute[248045]: 2026-01-22 10:04:25.318 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:04:25 np0005591760 nova_compute[248045]: 2026-01-22 10:04:25.318 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:04:25 np0005591760 nova_compute[248045]: 2026-01-22 10:04:25.525 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v973: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 22 05:04:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:04:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:26.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:04:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:26.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:27.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:27.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:27.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:27.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:04:27] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:04:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:04:27] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:04:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v974: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:04:27 np0005591760 nova_compute[248045]: 2026-01-22 10:04:27.988 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:04:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:28.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:28.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:28.929Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:28.929Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:28.930Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:28.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v975: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:04:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:04:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:04:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:04:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:30 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:04:30 np0005591760 nova_compute[248045]: 2026-01-22 10:04:30.526 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:30.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:30.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v976: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 22 05:04:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:04:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:32.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:04:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:32.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:32 np0005591760 nova_compute[248045]: 2026-01-22 10:04:32.990 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:04:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:33.569Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:33.588Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:33.589Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:33.589Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v977: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:04:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:04:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:34.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:04:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:34.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:04:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:04:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:04:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:04:35 np0005591760 nova_compute[248045]: 2026-01-22 10:04:35.528 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v978: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:04:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:36.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:36.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:37.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:37.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:37.099Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:37.099Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:04:37] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:04:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:04:37] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:04:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v979: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:04:37 np0005591760 nova_compute[248045]: 2026-01-22 10:04:37.992 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.556963) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076278556987, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1425, "num_deletes": 256, "total_data_size": 2661565, "memory_usage": 2698384, "flush_reason": "Manual Compaction"}
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076278562434, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2586186, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27316, "largest_seqno": 28740, "table_properties": {"data_size": 2579630, "index_size": 3691, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13549, "raw_average_key_size": 19, "raw_value_size": 2566380, "raw_average_value_size": 3655, "num_data_blocks": 163, "num_entries": 702, "num_filter_entries": 702, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769076145, "oldest_key_time": 1769076145, "file_creation_time": 1769076278, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 5494 microseconds, and 4171 cpu microseconds.
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 05:04:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:38.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.562458) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2586186 bytes OK
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.562469) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.563952) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.563964) EVENT_LOG_v1 {"time_micros": 1769076278563961, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.563987) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2655450, prev total WAL file size 2655450, number of live WAL files 2.
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.564581) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353030' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2525KB)], [59(13MB)]
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076278564629, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 16706054, "oldest_snapshot_seqno": -1}
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6497 keys, 16565450 bytes, temperature: kUnknown
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076278603304, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 16565450, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16520136, "index_size": 27988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 166072, "raw_average_key_size": 25, "raw_value_size": 16401224, "raw_average_value_size": 2524, "num_data_blocks": 1141, "num_entries": 6497, "num_filter_entries": 6497, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769076278, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.603538) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16565450 bytes
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.603995) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 430.4 rd, 426.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 13.5 +0.0 blob) out(15.8 +0.0 blob), read-write-amplify(12.9) write-amplify(6.4) OK, records in: 7025, records dropped: 528 output_compression: NoCompression
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.604009) EVENT_LOG_v1 {"time_micros": 1769076278604003, "job": 32, "event": "compaction_finished", "compaction_time_micros": 38814, "compaction_time_cpu_micros": 24209, "output_level": 6, "num_output_files": 1, "total_output_size": 16565450, "num_input_records": 7025, "num_output_records": 6497, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076278604716, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076278606787, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.564511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.606911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.606915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.606917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.606919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:04:38 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:38.606920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:04:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:38.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:38.930Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:38.930Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:38.930Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:38.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v980: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:04:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:04:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:04:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:04:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:04:40 np0005591760 nova_compute[248045]: 2026-01-22 10:04:40.530 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:40.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:04:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:40.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:04:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 8 op/s
Jan 22 05:04:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:04:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:42.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:04:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:42.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:42 np0005591760 nova_compute[248045]: 2026-01-22 10:04:42.994 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:04:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:43.569Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:43.581Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:43.581Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:43.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 7 op/s
Jan 22 05:04:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:44.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:44.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:04:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:04:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:04:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:45 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:04:45 np0005591760 nova_compute[248045]: 2026-01-22 10:04:45.531 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 22 05:04:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:46.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:46.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:47.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:47.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:47.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:47.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:04:47.323 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:04:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:04:47.323 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:04:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:04:47.324 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:04:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:04:47] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:04:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:04:47] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:04:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Jan 22 05:04:47 np0005591760 nova_compute[248045]: 2026-01-22 10:04:47.996 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:04:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:48.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:48.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:48.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:48.933Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:48.933Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:48.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:04:49
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.control', '.nfs', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'vms']
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:04:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Jan 22 05:04:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:04:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:04:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:04:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:50 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:04:50 np0005591760 podman[274094]: 2026-01-22 10:04:50.052565897 +0000 UTC m=+0.044412135 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 05:04:50 np0005591760 nova_compute[248045]: 2026-01-22 10:04:50.534 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:50.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:50.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 22 05:04:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:52.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:04:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:52.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:04:52 np0005591760 nova_compute[248045]: 2026-01-22 10:04:52.997 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:53 np0005591760 podman[274139]: 2026-01-22 10:04:53.357535483 +0000 UTC m=+0.075833460 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 05:04:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:04:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:53.570Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 05:04:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:53.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:53.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:53.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 05:04:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:04:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:04:53 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 05:04:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:04:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:54.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:04:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:54 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:54.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 05:04:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:54 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 05:04:54 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Jan 22 05:04:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:04:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:04:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:04:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:04:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:55 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:04:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:55 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:04:55 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:55 np0005591760 podman[274392]: 2026-01-22 10:04:55.377767905 +0000 UTC m=+0.026995781 container create 0c22e314fde3d89241cfbada2822ade3ffa33eca0c9e72d7ce74ac1ccc490de5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lamport, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 05:04:55 np0005591760 systemd[1]: Started libpod-conmon-0c22e314fde3d89241cfbada2822ade3ffa33eca0c9e72d7ce74ac1ccc490de5.scope.
Jan 22 05:04:55 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:04:55 np0005591760 podman[274392]: 2026-01-22 10:04:55.427687374 +0000 UTC m=+0.076915260 container init 0c22e314fde3d89241cfbada2822ade3ffa33eca0c9e72d7ce74ac1ccc490de5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:04:55 np0005591760 podman[274392]: 2026-01-22 10:04:55.432939869 +0000 UTC m=+0.082167735 container start 0c22e314fde3d89241cfbada2822ade3ffa33eca0c9e72d7ce74ac1ccc490de5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lamport, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:04:55 np0005591760 podman[274392]: 2026-01-22 10:04:55.434226827 +0000 UTC m=+0.083454693 container attach 0c22e314fde3d89241cfbada2822ade3ffa33eca0c9e72d7ce74ac1ccc490de5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 05:04:55 np0005591760 nervous_lamport[274406]: 167 167
Jan 22 05:04:55 np0005591760 systemd[1]: libpod-0c22e314fde3d89241cfbada2822ade3ffa33eca0c9e72d7ce74ac1ccc490de5.scope: Deactivated successfully.
Jan 22 05:04:55 np0005591760 podman[274392]: 2026-01-22 10:04:55.436931818 +0000 UTC m=+0.086159724 container died 0c22e314fde3d89241cfbada2822ade3ffa33eca0c9e72d7ce74ac1ccc490de5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lamport, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 05:04:55 np0005591760 systemd[1]: var-lib-containers-storage-overlay-d61055aacee11da5792b9261ef407774ce1a2dc0da6baec33a6353348a64622e-merged.mount: Deactivated successfully.
Jan 22 05:04:55 np0005591760 podman[274392]: 2026-01-22 10:04:55.45576828 +0000 UTC m=+0.104996135 container remove 0c22e314fde3d89241cfbada2822ade3ffa33eca0c9e72d7ce74ac1ccc490de5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lamport, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:04:55 np0005591760 podman[274392]: 2026-01-22 10:04:55.36728165 +0000 UTC m=+0.016509536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:04:55 np0005591760 systemd[1]: libpod-conmon-0c22e314fde3d89241cfbada2822ade3ffa33eca0c9e72d7ce74ac1ccc490de5.scope: Deactivated successfully.
Jan 22 05:04:55 np0005591760 nova_compute[248045]: 2026-01-22 10:04:55.535 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:04:55 np0005591760 podman[274428]: 2026-01-22 10:04:55.573674731 +0000 UTC m=+0.027543845 container create dca64f5b93ede0bbcf972872f0ba18f2fc5bfa71517f288f801d8f29b468ae00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 05:04:55 np0005591760 systemd[1]: Started libpod-conmon-dca64f5b93ede0bbcf972872f0ba18f2fc5bfa71517f288f801d8f29b468ae00.scope.
Jan 22 05:04:55 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:04:55 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e941113dfb67020a39d870ad4272693cc1f09863b1f3fb828ea37ab86fc420/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:55 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e941113dfb67020a39d870ad4272693cc1f09863b1f3fb828ea37ab86fc420/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:55 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e941113dfb67020a39d870ad4272693cc1f09863b1f3fb828ea37ab86fc420/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:55 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e941113dfb67020a39d870ad4272693cc1f09863b1f3fb828ea37ab86fc420/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:55 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e941113dfb67020a39d870ad4272693cc1f09863b1f3fb828ea37ab86fc420/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:55 np0005591760 podman[274428]: 2026-01-22 10:04:55.640641147 +0000 UTC m=+0.094510271 container init dca64f5b93ede0bbcf972872f0ba18f2fc5bfa71517f288f801d8f29b468ae00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:04:55 np0005591760 podman[274428]: 2026-01-22 10:04:55.645049602 +0000 UTC m=+0.098918716 container start dca64f5b93ede0bbcf972872f0ba18f2fc5bfa71517f288f801d8f29b468ae00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_wing, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:04:55 np0005591760 podman[274428]: 2026-01-22 10:04:55.646221061 +0000 UTC m=+0.100090175 container attach dca64f5b93ede0bbcf972872f0ba18f2fc5bfa71517f288f801d8f29b468ae00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:04:55 np0005591760 podman[274428]: 2026-01-22 10:04:55.563109497 +0000 UTC m=+0.016978621 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:04:55 np0005591760 stupefied_wing[274441]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:04:55 np0005591760 stupefied_wing[274441]: --> All data devices are unavailable
Jan 22 05:04:55 np0005591760 systemd[1]: libpod-dca64f5b93ede0bbcf972872f0ba18f2fc5bfa71517f288f801d8f29b468ae00.scope: Deactivated successfully.
Jan 22 05:04:55 np0005591760 podman[274428]: 2026-01-22 10:04:55.912881477 +0000 UTC m=+0.366750591 container died dca64f5b93ede0bbcf972872f0ba18f2fc5bfa71517f288f801d8f29b468ae00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_wing, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:04:55 np0005591760 systemd[1]: var-lib-containers-storage-overlay-26e941113dfb67020a39d870ad4272693cc1f09863b1f3fb828ea37ab86fc420-merged.mount: Deactivated successfully.
Jan 22 05:04:55 np0005591760 podman[274428]: 2026-01-22 10:04:55.935468171 +0000 UTC m=+0.389337285 container remove dca64f5b93ede0bbcf972872f0ba18f2fc5bfa71517f288f801d8f29b468ae00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_wing, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Jan 22 05:04:55 np0005591760 systemd[1]: libpod-conmon-dca64f5b93ede0bbcf972872f0ba18f2fc5bfa71517f288f801d8f29b468ae00.scope: Deactivated successfully.
Jan 22 05:04:55 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:55 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:55 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:04:55 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:55 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:55 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:04:56 np0005591760 podman[274548]: 2026-01-22 10:04:56.34053597 +0000 UTC m=+0.027696382 container create ddeef3b2c3a5343095578403e0247fdd1eac62087717e581c23132f6a510ca14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 22 05:04:56 np0005591760 systemd[1]: Started libpod-conmon-ddeef3b2c3a5343095578403e0247fdd1eac62087717e581c23132f6a510ca14.scope.
Jan 22 05:04:56 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:04:56 np0005591760 podman[274548]: 2026-01-22 10:04:56.387464028 +0000 UTC m=+0.074624460 container init ddeef3b2c3a5343095578403e0247fdd1eac62087717e581c23132f6a510ca14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 22 05:04:56 np0005591760 podman[274548]: 2026-01-22 10:04:56.393046074 +0000 UTC m=+0.080206487 container start ddeef3b2c3a5343095578403e0247fdd1eac62087717e581c23132f6a510ca14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_liskov, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 22 05:04:56 np0005591760 podman[274548]: 2026-01-22 10:04:56.394243823 +0000 UTC m=+0.081404246 container attach ddeef3b2c3a5343095578403e0247fdd1eac62087717e581c23132f6a510ca14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 22 05:04:56 np0005591760 hardcore_liskov[274561]: 167 167
Jan 22 05:04:56 np0005591760 systemd[1]: libpod-ddeef3b2c3a5343095578403e0247fdd1eac62087717e581c23132f6a510ca14.scope: Deactivated successfully.
Jan 22 05:04:56 np0005591760 podman[274548]: 2026-01-22 10:04:56.396863413 +0000 UTC m=+0.084023835 container died ddeef3b2c3a5343095578403e0247fdd1eac62087717e581c23132f6a510ca14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_liskov, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:04:56 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e898be034f35bc1010ffc84e8644244ace8077dd3d4635db0a22cd26af35c520-merged.mount: Deactivated successfully.
Jan 22 05:04:56 np0005591760 podman[274548]: 2026-01-22 10:04:56.411643417 +0000 UTC m=+0.098803828 container remove ddeef3b2c3a5343095578403e0247fdd1eac62087717e581c23132f6a510ca14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 05:04:56 np0005591760 podman[274548]: 2026-01-22 10:04:56.329291093 +0000 UTC m=+0.016451525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:04:56 np0005591760 systemd[1]: libpod-conmon-ddeef3b2c3a5343095578403e0247fdd1eac62087717e581c23132f6a510ca14.scope: Deactivated successfully.
Jan 22 05:04:56 np0005591760 podman[274584]: 2026-01-22 10:04:56.53067553 +0000 UTC m=+0.027617712 container create 8f46bf60e66ce5a41ea08a93da786822b98afa7fa7d0791d9e552ac27634c45d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bell, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True)
Jan 22 05:04:56 np0005591760 systemd[1]: Started libpod-conmon-8f46bf60e66ce5a41ea08a93da786822b98afa7fa7d0791d9e552ac27634c45d.scope.
Jan 22 05:04:56 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:04:56 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40bc6259c4bf39f8e9fb969723e36f6bc40a84cf8fb80e7e4d91b73b33e6da45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:56 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40bc6259c4bf39f8e9fb969723e36f6bc40a84cf8fb80e7e4d91b73b33e6da45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:56 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40bc6259c4bf39f8e9fb969723e36f6bc40a84cf8fb80e7e4d91b73b33e6da45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:56 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40bc6259c4bf39f8e9fb969723e36f6bc40a84cf8fb80e7e4d91b73b33e6da45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:56 np0005591760 podman[274584]: 2026-01-22 10:04:56.575815417 +0000 UTC m=+0.072757598 container init 8f46bf60e66ce5a41ea08a93da786822b98afa7fa7d0791d9e552ac27634c45d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:04:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:56.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:56 np0005591760 podman[274584]: 2026-01-22 10:04:56.580316675 +0000 UTC m=+0.077258857 container start 8f46bf60e66ce5a41ea08a93da786822b98afa7fa7d0791d9e552ac27634c45d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:04:56 np0005591760 podman[274584]: 2026-01-22 10:04:56.581431167 +0000 UTC m=+0.078373348 container attach 8f46bf60e66ce5a41ea08a93da786822b98afa7fa7d0791d9e552ac27634c45d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 05:04:56 np0005591760 podman[274584]: 2026-01-22 10:04:56.520045283 +0000 UTC m=+0.016987485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:04:56 np0005591760 nifty_bell[274597]: {
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:    "0": [
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:        {
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            "devices": [
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "/dev/loop3"
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            ],
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            "lv_name": "ceph_lv0",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            "lv_size": "21470642176",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            "name": "ceph_lv0",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            "tags": {
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.cluster_name": "ceph",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.crush_device_class": "",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.encrypted": "0",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.osd_id": "0",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.type": "block",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.vdo": "0",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:                "ceph.with_tpm": "0"
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            },
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            "type": "block",
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:            "vg_name": "ceph_vg0"
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:        }
Jan 22 05:04:56 np0005591760 nifty_bell[274597]:    ]
Jan 22 05:04:56 np0005591760 nifty_bell[274597]: }
Jan 22 05:04:56 np0005591760 systemd[1]: libpod-8f46bf60e66ce5a41ea08a93da786822b98afa7fa7d0791d9e552ac27634c45d.scope: Deactivated successfully.
Jan 22 05:04:56 np0005591760 podman[274607]: 2026-01-22 10:04:56.839536319 +0000 UTC m=+0.017451341 container died 8f46bf60e66ce5a41ea08a93da786822b98afa7fa7d0791d9e552ac27634c45d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 05:04:56 np0005591760 systemd[1]: var-lib-containers-storage-overlay-40bc6259c4bf39f8e9fb969723e36f6bc40a84cf8fb80e7e4d91b73b33e6da45-merged.mount: Deactivated successfully.
Jan 22 05:04:56 np0005591760 podman[274607]: 2026-01-22 10:04:56.86289541 +0000 UTC m=+0.040810433 container remove 8f46bf60e66ce5a41ea08a93da786822b98afa7fa7d0791d9e552ac27634c45d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 22 05:04:56 np0005591760 systemd[1]: libpod-conmon-8f46bf60e66ce5a41ea08a93da786822b98afa7fa7d0791d9e552ac27634c45d.scope: Deactivated successfully.
Jan 22 05:04:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:56.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:04:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:57.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:57.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:57.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:57.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:57 np0005591760 podman[274701]: 2026-01-22 10:04:57.280245861 +0000 UTC m=+0.027747439 container create add34c4b7cd5f85a8d21d9de6e7bef2367441b4a3153b3d278376babecbc80d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_bouman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 05:04:57 np0005591760 systemd[1]: Started libpod-conmon-add34c4b7cd5f85a8d21d9de6e7bef2367441b4a3153b3d278376babecbc80d1.scope.
Jan 22 05:04:57 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:04:57 np0005591760 podman[274701]: 2026-01-22 10:04:57.34297119 +0000 UTC m=+0.090472767 container init add34c4b7cd5f85a8d21d9de6e7bef2367441b4a3153b3d278376babecbc80d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 05:04:57 np0005591760 podman[274701]: 2026-01-22 10:04:57.347667567 +0000 UTC m=+0.095169134 container start add34c4b7cd5f85a8d21d9de6e7bef2367441b4a3153b3d278376babecbc80d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_bouman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 05:04:57 np0005591760 admiring_bouman[274714]: 167 167
Jan 22 05:04:57 np0005591760 systemd[1]: libpod-add34c4b7cd5f85a8d21d9de6e7bef2367441b4a3153b3d278376babecbc80d1.scope: Deactivated successfully.
Jan 22 05:04:57 np0005591760 podman[274701]: 2026-01-22 10:04:57.351298806 +0000 UTC m=+0.098800382 container attach add34c4b7cd5f85a8d21d9de6e7bef2367441b4a3153b3d278376babecbc80d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_bouman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:04:57 np0005591760 podman[274701]: 2026-01-22 10:04:57.353903216 +0000 UTC m=+0.101404793 container died add34c4b7cd5f85a8d21d9de6e7bef2367441b4a3153b3d278376babecbc80d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_bouman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:04:57 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1a35552d75606603d9845a0c63b219dbdd391056a3fb48ff71265e8ca3987cb6-merged.mount: Deactivated successfully.
Jan 22 05:04:57 np0005591760 podman[274701]: 2026-01-22 10:04:57.268165259 +0000 UTC m=+0.015666846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:04:57 np0005591760 podman[274701]: 2026-01-22 10:04:57.371562218 +0000 UTC m=+0.119063795 container remove add34c4b7cd5f85a8d21d9de6e7bef2367441b4a3153b3d278376babecbc80d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 22 05:04:57 np0005591760 systemd[1]: libpod-conmon-add34c4b7cd5f85a8d21d9de6e7bef2367441b4a3153b3d278376babecbc80d1.scope: Deactivated successfully.
Jan 22 05:04:57 np0005591760 podman[274736]: 2026-01-22 10:04:57.492685295 +0000 UTC m=+0.029074571 container create d765865430bf81bc04a318c90ab9eb1599f967e3bbbfc7b6c4e3edc008a88efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Jan 22 05:04:57 np0005591760 systemd[1]: Started libpod-conmon-d765865430bf81bc04a318c90ab9eb1599f967e3bbbfc7b6c4e3edc008a88efc.scope.
Jan 22 05:04:57 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:04:57 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdab010b2f64930d2140963724cb70295d16bad37acd0cd966e98c1831edf4ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:57 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdab010b2f64930d2140963724cb70295d16bad37acd0cd966e98c1831edf4ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:57 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdab010b2f64930d2140963724cb70295d16bad37acd0cd966e98c1831edf4ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:57 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdab010b2f64930d2140963724cb70295d16bad37acd0cd966e98c1831edf4ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:04:57 np0005591760 podman[274736]: 2026-01-22 10:04:57.541927487 +0000 UTC m=+0.078316783 container init d765865430bf81bc04a318c90ab9eb1599f967e3bbbfc7b6c4e3edc008a88efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 05:04:57 np0005591760 podman[274736]: 2026-01-22 10:04:57.547972728 +0000 UTC m=+0.084362003 container start d765865430bf81bc04a318c90ab9eb1599f967e3bbbfc7b6c4e3edc008a88efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 05:04:57 np0005591760 podman[274736]: 2026-01-22 10:04:57.548948539 +0000 UTC m=+0.085337804 container attach d765865430bf81bc04a318c90ab9eb1599f967e3bbbfc7b6c4e3edc008a88efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chandrasekhar, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 05:04:57 np0005591760 podman[274736]: 2026-01-22 10:04:57.480359932 +0000 UTC m=+0.016749227 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:04:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:04:57] "GET /metrics HTTP/1.1" 200 48602 "" "Prometheus/2.51.0"
Jan 22 05:04:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:04:57] "GET /metrics HTTP/1.1" 200 48602 "" "Prometheus/2.51.0"
Jan 22 05:04:58 np0005591760 nova_compute[248045]: 2026-01-22 10:04:57.999 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:04:58 np0005591760 lvm[274826]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:04:58 np0005591760 lvm[274826]: VG ceph_vg0 finished
Jan 22 05:04:58 np0005591760 funny_chandrasekhar[274749]: {}
Jan 22 05:04:58 np0005591760 systemd[1]: libpod-d765865430bf81bc04a318c90ab9eb1599f967e3bbbfc7b6c4e3edc008a88efc.scope: Deactivated successfully.
Jan 22 05:04:58 np0005591760 podman[274828]: 2026-01-22 10:04:58.110009433 +0000 UTC m=+0.036470278 container died d765865430bf81bc04a318c90ab9eb1599f967e3bbbfc7b6c4e3edc008a88efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chandrasekhar, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 05:04:58 np0005591760 systemd[1]: var-lib-containers-storage-overlay-bdab010b2f64930d2140963724cb70295d16bad37acd0cd966e98c1831edf4ab-merged.mount: Deactivated successfully.
Jan 22 05:04:58 np0005591760 podman[274828]: 2026-01-22 10:04:58.163304549 +0000 UTC m=+0.089765363 container remove d765865430bf81bc04a318c90ab9eb1599f967e3bbbfc7b6c4e3edc008a88efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:04:58 np0005591760 systemd[1]: libpod-conmon-d765865430bf81bc04a318c90ab9eb1599f967e3bbbfc7b6c4e3edc008a88efc.scope: Deactivated successfully.
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.568056) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076298568084, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 504, "num_deletes": 250, "total_data_size": 640716, "memory_usage": 650296, "flush_reason": "Manual Compaction"}
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076298570605, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 616510, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28741, "largest_seqno": 29244, "table_properties": {"data_size": 613565, "index_size": 917, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6079, "raw_average_key_size": 16, "raw_value_size": 607755, "raw_average_value_size": 1683, "num_data_blocks": 38, "num_entries": 361, "num_filter_entries": 361, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769076278, "oldest_key_time": 1769076278, "file_creation_time": 1769076298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 2574 microseconds, and 1876 cpu microseconds.
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.570629) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 616510 bytes OK
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.570640) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.571240) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.571249) EVENT_LOG_v1 {"time_micros": 1769076298571247, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.571258) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 637802, prev total WAL file size 637802, number of live WAL files 2.
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.571529) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323533' seq:72057594037927935, type:22 .. '6B7600353034' seq:0, type:0; will stop at (end)
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(602KB)], [62(15MB)]
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076298571561, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 17181960, "oldest_snapshot_seqno": -1}
Jan 22 05:04:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:04:58.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6341 keys, 15941704 bytes, temperature: kUnknown
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076298610727, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 15941704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15897638, "index_size": 27124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 164486, "raw_average_key_size": 25, "raw_value_size": 15781490, "raw_average_value_size": 2488, "num_data_blocks": 1091, "num_entries": 6341, "num_filter_entries": 6341, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769076298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.611018) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 15941704 bytes
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.611396) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 436.8 rd, 405.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 15.8 +0.0 blob) out(15.2 +0.0 blob), read-write-amplify(53.7) write-amplify(25.9) OK, records in: 6858, records dropped: 517 output_compression: NoCompression
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.611410) EVENT_LOG_v1 {"time_micros": 1769076298611404, "job": 34, "event": "compaction_finished", "compaction_time_micros": 39336, "compaction_time_cpu_micros": 24577, "output_level": 6, "num_output_files": 1, "total_output_size": 15941704, "num_input_records": 6858, "num_output_records": 6341, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076298611674, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076298613530, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.571480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.613569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.613573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.613574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.613576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:04:58 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:04:58.613577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:04:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:58.922Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:58.931Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:58.931Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:04:58.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:04:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:04:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:04:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:04:58.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:04:59 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:59 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:04:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:05:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:04:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:00 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:00 np0005591760 nova_compute[248045]: 2026-01-22 10:05:00.537 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:05:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:00.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:05:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:00.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:05:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 22 05:05:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:02.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:02.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:03 np0005591760 nova_compute[248045]: 2026-01-22 10:05:03.000 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:05:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:05:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:05:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:03.572Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:03.585Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:03.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:03.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:04.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:05:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:04.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:05:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:05 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v993: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:05:05 np0005591760 nova_compute[248045]: 2026-01-22 10:05:05.539 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:05:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:05:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:06.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:05:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:05:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:06.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:05:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v994: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Jan 22 05:05:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:07.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:07.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:07.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:07.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:05:07] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:05:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:05:07] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:05:08 np0005591760 nova_compute[248045]: 2026-01-22 10:05:08.002 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:05:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:08.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:08.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:08.929Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:08.929Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:08.929Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:08.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v995: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Jan 22 05:05:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:10 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:10 np0005591760 nova_compute[248045]: 2026-01-22 10:05:10.541 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:10.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:10.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v996: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s
Jan 22 05:05:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:12.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:12.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:13 np0005591760 nova_compute[248045]: 2026-01-22 10:05:13.004 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v997: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 120 op/s
Jan 22 05:05:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:05:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:13.573Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:13.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:13.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:13.598Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:05:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:14.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:05:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:14.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:15 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v998: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 120 op/s
Jan 22 05:05:15 np0005591760 nova_compute[248045]: 2026-01-22 10:05:15.542 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:16.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:05:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:16.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:05:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v999: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s
Jan 22 05:05:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:17.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:17.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:17.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:17.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:05:17] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:05:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:05:17] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:05:18 np0005591760 nova_compute[248045]: 2026-01-22 10:05:18.006 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:18 np0005591760 nova_compute[248045]: 2026-01-22 10:05:18.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:05:18 np0005591760 nova_compute[248045]: 2026-01-22 10:05:18.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:05:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:05:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:18.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:18.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:18.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:18.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:18.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:18.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1000: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 05:05:19 np0005591760 nova_compute[248045]: 2026-01-22 10:05:19.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:05:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:05:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:05:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:05:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:05:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:05:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:05:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:20 np0005591760 nova_compute[248045]: 2026-01-22 10:05:20.544 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:05:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:20.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:05:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:20.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1001: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 22 05:05:21 np0005591760 podman[274913]: 2026-01-22 10:05:21.088353807 +0000 UTC m=+0.068988001 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:05:21 np0005591760 nova_compute[248045]: 2026-01-22 10:05:21.296 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.315 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.315 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.315 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.315 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.315 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:05:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:22.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:05:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3299516995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.708 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.903 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.904 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4515MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.904 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.904 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.946 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.946 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:05:22 np0005591760 nova_compute[248045]: 2026-01-22 10:05:22.957 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:05:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:22.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1002: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:05:23 np0005591760 nova_compute[248045]: 2026-01-22 10:05:23.009 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:05:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1714826167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:05:23 np0005591760 nova_compute[248045]: 2026-01-22 10:05:23.345 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:05:23 np0005591760 nova_compute[248045]: 2026-01-22 10:05:23.349 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:05:23 np0005591760 nova_compute[248045]: 2026-01-22 10:05:23.366 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:05:23 np0005591760 nova_compute[248045]: 2026-01-22 10:05:23.367 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:05:23 np0005591760 nova_compute[248045]: 2026-01-22 10:05:23.367 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:05:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:05:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:23.574Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:23.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:23.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:23.589Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:24 np0005591760 podman[274976]: 2026-01-22 10:05:24.117734891 +0000 UTC m=+0.099645716 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 05:05:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:05:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:24.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:05:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:05:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:24.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:05:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1003: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:05:25 np0005591760 nova_compute[248045]: 2026-01-22 10:05:25.545 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:26 np0005591760 nova_compute[248045]: 2026-01-22 10:05:26.368 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:05:26 np0005591760 nova_compute[248045]: 2026-01-22 10:05:26.369 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:05:26 np0005591760 nova_compute[248045]: 2026-01-22 10:05:26.369 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:05:26 np0005591760 nova_compute[248045]: 2026-01-22 10:05:26.379 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:05:26 np0005591760 nova_compute[248045]: 2026-01-22 10:05:26.379 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:05:26 np0005591760 nova_compute[248045]: 2026-01-22 10:05:26.379 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:05:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:26.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:26.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1004: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:05:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:27.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:27.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:27.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:27.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:27 np0005591760 nova_compute[248045]: 2026-01-22 10:05:27.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:05:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:05:27] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:05:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:05:27] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:05:28 np0005591760 nova_compute[248045]: 2026-01-22 10:05:28.011 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:05:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:05:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:28.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:05:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:28.925Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:28.935Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:28.936Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:28.936Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:05:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:28.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:05:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1005: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:05:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:30 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:30 np0005591760 nova_compute[248045]: 2026-01-22 10:05:30.546 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:30.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:30.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1006: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:05:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:05:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:32.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:05:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:32.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1007: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:05:33 np0005591760 nova_compute[248045]: 2026-01-22 10:05:33.014 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:05:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:33.574Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:33.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:33.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:33.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:34.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:34.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1008: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:05:35 np0005591760 nova_compute[248045]: 2026-01-22 10:05:35.550 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:36.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:37.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1009: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:05:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:37.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:37.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:37.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:37.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:05:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:05:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:05:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:05:38 np0005591760 nova_compute[248045]: 2026-01-22 10:05:38.017 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:05:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:38.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:38.927Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:38.937Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:38.937Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:38.938Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:38 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:38 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:38 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:39.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1010: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:05:40 np0005591760 nova_compute[248045]: 2026-01-22 10:05:40.550 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:40.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:41.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1011: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:05:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:42.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:43.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1012: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:05:43 np0005591760 nova_compute[248045]: 2026-01-22 10:05:43.020 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:43.574Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:05:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:43.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:43.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:43.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:43 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:43 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:43 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:43 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:44.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:45.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1013: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:05:45 np0005591760 nova_compute[248045]: 2026-01-22 10:05:45.552 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:46.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:47.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1014: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:05:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:47.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:47.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:47.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:47.102Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:05:47.323 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:05:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:05:47.324 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:05:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:05:47.324 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:05:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:05:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:05:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:05:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:05:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:48 np0005591760 nova_compute[248045]: 2026-01-22 10:05:48.021 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:05:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:48.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:48.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:48.936Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:48.937Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:48.937Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:49.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1015: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:05:49
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', '.nfs', 'cephfs.cephfs.meta', 'backups', 'images', 'vms', '.mgr', 'default.rgw.log', 'default.rgw.meta']
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:05:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:05:50 np0005591760 nova_compute[248045]: 2026-01-22 10:05:50.556 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:05:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:50.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.800862) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076350800904, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 680, "num_deletes": 251, "total_data_size": 925835, "memory_usage": 938120, "flush_reason": "Manual Compaction"}
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076350806514, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 912146, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29245, "largest_seqno": 29924, "table_properties": {"data_size": 908665, "index_size": 1325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7960, "raw_average_key_size": 19, "raw_value_size": 901741, "raw_average_value_size": 2167, "num_data_blocks": 60, "num_entries": 416, "num_filter_entries": 416, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769076299, "oldest_key_time": 1769076299, "file_creation_time": 1769076350, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 5688 microseconds, and 4630 cpu microseconds.
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.806553) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 912146 bytes OK
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.806574) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.807073) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.807085) EVENT_LOG_v1 {"time_micros": 1769076350807081, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.807101) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 922360, prev total WAL file size 922360, number of live WAL files 2.
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.807578) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(890KB)], [65(15MB)]
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076350807616, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16853850, "oldest_snapshot_seqno": -1}
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6246 keys, 14812352 bytes, temperature: kUnknown
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076350842216, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14812352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14769820, "index_size": 25846, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 163168, "raw_average_key_size": 26, "raw_value_size": 14656216, "raw_average_value_size": 2346, "num_data_blocks": 1034, "num_entries": 6246, "num_filter_entries": 6246, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769076350, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.842638) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14812352 bytes
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.843066) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 482.7 rd, 424.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 15.2 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(34.7) write-amplify(16.2) OK, records in: 6757, records dropped: 511 output_compression: NoCompression
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.843079) EVENT_LOG_v1 {"time_micros": 1769076350843073, "job": 36, "event": "compaction_finished", "compaction_time_micros": 34918, "compaction_time_cpu_micros": 24346, "output_level": 6, "num_output_files": 1, "total_output_size": 14812352, "num_input_records": 6757, "num_output_records": 6246, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076350843445, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076350845637, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.807462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.845709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.845716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.845727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.845729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:05:50 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:05:50.845731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:05:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:51.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1016: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:05:51 np0005591760 ceph-mgr[74522]: [devicehealth INFO root] Check health
Jan 22 05:05:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:52 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:52 np0005591760 podman[275053]: 2026-01-22 10:05:52.053553193 +0000 UTC m=+0.041164279 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 05:05:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:52.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:53.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:53 np0005591760 nova_compute[248045]: 2026-01-22 10:05:53.023 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1017: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:05:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:53.574Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:05:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:53.807Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:53.807Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:53.808Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:54 np0005591760 podman[275095]: 2026-01-22 10:06:54.449398317 is not the timestamp; restoring verbatim: 2026-01-22 10:05:54.449398317 +0000 UTC m=+0.059350406 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 05:05:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:05:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:54.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:05:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:55.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1018: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:05:55 np0005591760 nova_compute[248045]: 2026-01-22 10:05:55.558 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:56.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:05:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:05:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:05:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:05:57 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:05:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:05:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:57.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:05:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1019: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:05:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:57.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:57.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:57.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:57.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:05:57] "GET /metrics HTTP/1.1" 200 48597 "" "Prometheus/2.51.0"
Jan 22 05:05:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:05:57] "GET /metrics HTTP/1.1" 200 48597 "" "Prometheus/2.51.0"
Jan 22 05:05:58 np0005591760 nova_compute[248045]: 2026-01-22 10:05:58.025 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:05:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:05:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:05:58.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 05:05:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:05:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 05:05:58 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:05:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:05:58 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:05:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:58.930Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:58.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:58.946Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:05:58.946Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1020: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:05:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:05:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:05:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:05:59.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1021: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:05:59 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:05:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:06:00 np0005591760 podman[275353]: 2026-01-22 10:06:00.063276788 +0000 UTC m=+0.028932423 container create 93b8627f7acfd6025331fef32750ecd3bb3851a00ec0ca058587d17a56a386e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_aryabhata, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:06:00 np0005591760 systemd[1]: Started libpod-conmon-93b8627f7acfd6025331fef32750ecd3bb3851a00ec0ca058587d17a56a386e0.scope.
Jan 22 05:06:00 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:06:00 np0005591760 podman[275353]: 2026-01-22 10:06:00.113614808 +0000 UTC m=+0.079270452 container init 93b8627f7acfd6025331fef32750ecd3bb3851a00ec0ca058587d17a56a386e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 22 05:06:00 np0005591760 podman[275353]: 2026-01-22 10:06:00.119488584 +0000 UTC m=+0.085144218 container start 93b8627f7acfd6025331fef32750ecd3bb3851a00ec0ca058587d17a56a386e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 22 05:06:00 np0005591760 podman[275353]: 2026-01-22 10:06:00.120719576 +0000 UTC m=+0.086375210 container attach 93b8627f7acfd6025331fef32750ecd3bb3851a00ec0ca058587d17a56a386e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_aryabhata, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 05:06:00 np0005591760 dazzling_aryabhata[275366]: 167 167
Jan 22 05:06:00 np0005591760 systemd[1]: libpod-93b8627f7acfd6025331fef32750ecd3bb3851a00ec0ca058587d17a56a386e0.scope: Deactivated successfully.
Jan 22 05:06:00 np0005591760 podman[275353]: 2026-01-22 10:06:00.12469847 +0000 UTC m=+0.090354114 container died 93b8627f7acfd6025331fef32750ecd3bb3851a00ec0ca058587d17a56a386e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_aryabhata, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:06:00 np0005591760 systemd[1]: var-lib-containers-storage-overlay-74f1e0ac6a01f15aef6b12e534c20eeaf17c31c97e56b3de0cc132bc4edc8e2e-merged.mount: Deactivated successfully.
Jan 22 05:06:00 np0005591760 podman[275353]: 2026-01-22 10:06:00.145235349 +0000 UTC m=+0.110890983 container remove 93b8627f7acfd6025331fef32750ecd3bb3851a00ec0ca058587d17a56a386e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:06:00 np0005591760 podman[275353]: 2026-01-22 10:06:00.051284273 +0000 UTC m=+0.016939917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:06:00 np0005591760 systemd[1]: libpod-conmon-93b8627f7acfd6025331fef32750ecd3bb3851a00ec0ca058587d17a56a386e0.scope: Deactivated successfully.
Jan 22 05:06:00 np0005591760 podman[275388]: 2026-01-22 10:06:00.26665747 +0000 UTC m=+0.029243269 container create d43ab9e9bcc322f9f07dcbaf15f874dcc3bf336c7df8888fbcd62fa47e42cb67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_proskuriakova, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:06:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:06:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:06:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:06:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:06:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:06:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:06:00 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:06:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:06:00 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:06:00 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:06:00 np0005591760 systemd[1]: Started libpod-conmon-d43ab9e9bcc322f9f07dcbaf15f874dcc3bf336c7df8888fbcd62fa47e42cb67.scope.
Jan 22 05:06:00 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:06:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/216dc3bfbbca402d812aaa582364034e6bc2a717008b952f3a0714e596cf3d87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/216dc3bfbbca402d812aaa582364034e6bc2a717008b952f3a0714e596cf3d87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/216dc3bfbbca402d812aaa582364034e6bc2a717008b952f3a0714e596cf3d87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/216dc3bfbbca402d812aaa582364034e6bc2a717008b952f3a0714e596cf3d87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:00 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/216dc3bfbbca402d812aaa582364034e6bc2a717008b952f3a0714e596cf3d87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:00 np0005591760 podman[275388]: 2026-01-22 10:06:00.328144944 +0000 UTC m=+0.090730762 container init d43ab9e9bcc322f9f07dcbaf15f874dcc3bf336c7df8888fbcd62fa47e42cb67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:06:00 np0005591760 podman[275388]: 2026-01-22 10:06:00.332793522 +0000 UTC m=+0.095379310 container start d43ab9e9bcc322f9f07dcbaf15f874dcc3bf336c7df8888fbcd62fa47e42cb67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:06:00 np0005591760 podman[275388]: 2026-01-22 10:06:00.333975851 +0000 UTC m=+0.096561660 container attach d43ab9e9bcc322f9f07dcbaf15f874dcc3bf336c7df8888fbcd62fa47e42cb67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_proskuriakova, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 22 05:06:00 np0005591760 podman[275388]: 2026-01-22 10:06:00.254619047 +0000 UTC m=+0.017204865 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:06:00 np0005591760 nova_compute[248045]: 2026-01-22 10:06:00.559 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:00 np0005591760 interesting_proskuriakova[275402]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:06:00 np0005591760 interesting_proskuriakova[275402]: --> All data devices are unavailable
Jan 22 05:06:00 np0005591760 systemd[1]: libpod-d43ab9e9bcc322f9f07dcbaf15f874dcc3bf336c7df8888fbcd62fa47e42cb67.scope: Deactivated successfully.
Jan 22 05:06:00 np0005591760 podman[275388]: 2026-01-22 10:06:00.60096656 +0000 UTC m=+0.363552368 container died d43ab9e9bcc322f9f07dcbaf15f874dcc3bf336c7df8888fbcd62fa47e42cb67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_proskuriakova, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 05:06:00 np0005591760 systemd[1]: var-lib-containers-storage-overlay-216dc3bfbbca402d812aaa582364034e6bc2a717008b952f3a0714e596cf3d87-merged.mount: Deactivated successfully.
Jan 22 05:06:00 np0005591760 podman[275388]: 2026-01-22 10:06:00.622816745 +0000 UTC m=+0.385402544 container remove d43ab9e9bcc322f9f07dcbaf15f874dcc3bf336c7df8888fbcd62fa47e42cb67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:06:00 np0005591760 systemd[1]: libpod-conmon-d43ab9e9bcc322f9f07dcbaf15f874dcc3bf336c7df8888fbcd62fa47e42cb67.scope: Deactivated successfully.
Jan 22 05:06:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:00.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:01.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:01 np0005591760 podman[275509]: 2026-01-22 10:06:01.030423682 +0000 UTC m=+0.029781835 container create 6c632394bc75579763350ae3466ab32faa2948cec9cd9b4bf5b314e1c8927f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 22 05:06:01 np0005591760 systemd[1]: Started libpod-conmon-6c632394bc75579763350ae3466ab32faa2948cec9cd9b4bf5b314e1c8927f61.scope.
Jan 22 05:06:01 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:06:01 np0005591760 podman[275509]: 2026-01-22 10:06:01.089227836 +0000 UTC m=+0.088585999 container init 6c632394bc75579763350ae3466ab32faa2948cec9cd9b4bf5b314e1c8927f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_leakey, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 05:06:01 np0005591760 podman[275509]: 2026-01-22 10:06:01.093474875 +0000 UTC m=+0.092833029 container start 6c632394bc75579763350ae3466ab32faa2948cec9cd9b4bf5b314e1c8927f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_leakey, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 05:06:01 np0005591760 podman[275509]: 2026-01-22 10:06:01.094855529 +0000 UTC m=+0.094213682 container attach 6c632394bc75579763350ae3466ab32faa2948cec9cd9b4bf5b314e1c8927f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:06:01 np0005591760 gracious_leakey[275523]: 167 167
Jan 22 05:06:01 np0005591760 systemd[1]: libpod-6c632394bc75579763350ae3466ab32faa2948cec9cd9b4bf5b314e1c8927f61.scope: Deactivated successfully.
Jan 22 05:06:01 np0005591760 podman[275509]: 2026-01-22 10:06:01.096800918 +0000 UTC m=+0.096159071 container died 6c632394bc75579763350ae3466ab32faa2948cec9cd9b4bf5b314e1c8927f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_leakey, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 05:06:01 np0005591760 systemd[1]: var-lib-containers-storage-overlay-6cbdae9f624c1356eba42930e5e84e5052b93baaeb5463e09b5237c93d3dc43d-merged.mount: Deactivated successfully.
Jan 22 05:06:01 np0005591760 podman[275509]: 2026-01-22 10:06:01.019203983 +0000 UTC m=+0.018562157 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:06:01 np0005591760 podman[275509]: 2026-01-22 10:06:01.116771269 +0000 UTC m=+0.116129421 container remove 6c632394bc75579763350ae3466ab32faa2948cec9cd9b4bf5b314e1c8927f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_leakey, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 05:06:01 np0005591760 systemd[1]: libpod-conmon-6c632394bc75579763350ae3466ab32faa2948cec9cd9b4bf5b314e1c8927f61.scope: Deactivated successfully.
Jan 22 05:06:01 np0005591760 podman[275544]: 2026-01-22 10:06:01.23819326 +0000 UTC m=+0.029294886 container create fae3008eb63b20b999e88aab6f2186c3e28134f44de31b5bfa6cfc1ef3ec12de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:06:01 np0005591760 systemd[1]: Started libpod-conmon-fae3008eb63b20b999e88aab6f2186c3e28134f44de31b5bfa6cfc1ef3ec12de.scope.
Jan 22 05:06:01 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:06:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cec33bcf657fb9511872b975435c3baa4d0286bd70f7c4ba27eaa19ecdd83b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cec33bcf657fb9511872b975435c3baa4d0286bd70f7c4ba27eaa19ecdd83b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cec33bcf657fb9511872b975435c3baa4d0286bd70f7c4ba27eaa19ecdd83b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:01 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cec33bcf657fb9511872b975435c3baa4d0286bd70f7c4ba27eaa19ecdd83b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:01 np0005591760 podman[275544]: 2026-01-22 10:06:01.301198638 +0000 UTC m=+0.092300274 container init fae3008eb63b20b999e88aab6f2186c3e28134f44de31b5bfa6cfc1ef3ec12de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_nobel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:06:01 np0005591760 podman[275544]: 2026-01-22 10:06:01.306139235 +0000 UTC m=+0.097240861 container start fae3008eb63b20b999e88aab6f2186c3e28134f44de31b5bfa6cfc1ef3ec12de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 05:06:01 np0005591760 podman[275544]: 2026-01-22 10:06:01.307806318 +0000 UTC m=+0.098907944 container attach fae3008eb63b20b999e88aab6f2186c3e28134f44de31b5bfa6cfc1ef3ec12de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 05:06:01 np0005591760 podman[275544]: 2026-01-22 10:06:01.226631487 +0000 UTC m=+0.017733133 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]: {
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:    "0": [
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:        {
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            "devices": [
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "/dev/loop3"
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            ],
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            "lv_name": "ceph_lv0",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            "lv_size": "21470642176",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            "name": "ceph_lv0",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            "tags": {
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.cluster_name": "ceph",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.crush_device_class": "",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.encrypted": "0",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.osd_id": "0",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.type": "block",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.vdo": "0",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:                "ceph.with_tpm": "0"
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            },
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            "type": "block",
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:            "vg_name": "ceph_vg0"
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:        }
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]:    ]
Jan 22 05:06:01 np0005591760 vigorous_nobel[275558]: }
Jan 22 05:06:01 np0005591760 systemd[1]: libpod-fae3008eb63b20b999e88aab6f2186c3e28134f44de31b5bfa6cfc1ef3ec12de.scope: Deactivated successfully.
Jan 22 05:06:01 np0005591760 podman[275544]: 2026-01-22 10:06:01.547767304 +0000 UTC m=+0.338868930 container died fae3008eb63b20b999e88aab6f2186c3e28134f44de31b5bfa6cfc1ef3ec12de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 22 05:06:01 np0005591760 systemd[1]: var-lib-containers-storage-overlay-16cec33bcf657fb9511872b975435c3baa4d0286bd70f7c4ba27eaa19ecdd83b-merged.mount: Deactivated successfully.
Jan 22 05:06:01 np0005591760 podman[275544]: 2026-01-22 10:06:01.568685401 +0000 UTC m=+0.359787028 container remove fae3008eb63b20b999e88aab6f2186c3e28134f44de31b5bfa6cfc1ef3ec12de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_nobel, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 22 05:06:01 np0005591760 systemd[1]: libpod-conmon-fae3008eb63b20b999e88aab6f2186c3e28134f44de31b5bfa6cfc1ef3ec12de.scope: Deactivated successfully.
Jan 22 05:06:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:06:01 np0005591760 podman[275658]: 2026-01-22 10:06:01.990938739 +0000 UTC m=+0.030546186 container create a62664f165c0267a4788ec84a62c438acfafdbf4744300a29fef02e8a733d80d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:06:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:06:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:02 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:06:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:02 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:06:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:02 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:06:02 np0005591760 systemd[1]: Started libpod-conmon-a62664f165c0267a4788ec84a62c438acfafdbf4744300a29fef02e8a733d80d.scope.
Jan 22 05:06:02 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:06:02 np0005591760 podman[275658]: 2026-01-22 10:06:02.043894043 +0000 UTC m=+0.083501480 container init a62664f165c0267a4788ec84a62c438acfafdbf4744300a29fef02e8a733d80d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_bohr, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:06:02 np0005591760 podman[275658]: 2026-01-22 10:06:02.048435648 +0000 UTC m=+0.088043085 container start a62664f165c0267a4788ec84a62c438acfafdbf4744300a29fef02e8a733d80d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_bohr, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 05:06:02 np0005591760 podman[275658]: 2026-01-22 10:06:02.049581458 +0000 UTC m=+0.089188896 container attach a62664f165c0267a4788ec84a62c438acfafdbf4744300a29fef02e8a733d80d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_bohr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:06:02 np0005591760 thirsty_bohr[275672]: 167 167
Jan 22 05:06:02 np0005591760 systemd[1]: libpod-a62664f165c0267a4788ec84a62c438acfafdbf4744300a29fef02e8a733d80d.scope: Deactivated successfully.
Jan 22 05:06:02 np0005591760 podman[275658]: 2026-01-22 10:06:02.051821593 +0000 UTC m=+0.091429040 container died a62664f165c0267a4788ec84a62c438acfafdbf4744300a29fef02e8a733d80d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_bohr, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:06:02 np0005591760 systemd[1]: var-lib-containers-storage-overlay-3e867b0cbd2bcbc2dd9a3b093938ccec66c0a16573fef8b540e0bccc541e9ee5-merged.mount: Deactivated successfully.
Jan 22 05:06:02 np0005591760 podman[275658]: 2026-01-22 10:06:02.069361972 +0000 UTC m=+0.108969409 container remove a62664f165c0267a4788ec84a62c438acfafdbf4744300a29fef02e8a733d80d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Jan 22 05:06:02 np0005591760 podman[275658]: 2026-01-22 10:06:01.979412442 +0000 UTC m=+0.019019889 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:06:02 np0005591760 systemd[1]: libpod-conmon-a62664f165c0267a4788ec84a62c438acfafdbf4744300a29fef02e8a733d80d.scope: Deactivated successfully.
Jan 22 05:06:02 np0005591760 podman[275695]: 2026-01-22 10:06:02.189166673 +0000 UTC m=+0.028357649 container create d16988fc71afb2e953c95a9c74efceeeff3a6efa4be5263335e2bd01b994afae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_antonelli, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 05:06:02 np0005591760 systemd[1]: Started libpod-conmon-d16988fc71afb2e953c95a9c74efceeeff3a6efa4be5263335e2bd01b994afae.scope.
Jan 22 05:06:02 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:06:02 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b1ab2fc62c3a45a19a279b371f7eaffc56fa36d03023b9fb1b4fb9efd0d186/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:02 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b1ab2fc62c3a45a19a279b371f7eaffc56fa36d03023b9fb1b4fb9efd0d186/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:02 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b1ab2fc62c3a45a19a279b371f7eaffc56fa36d03023b9fb1b4fb9efd0d186/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:02 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b1ab2fc62c3a45a19a279b371f7eaffc56fa36d03023b9fb1b4fb9efd0d186/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:06:02 np0005591760 podman[275695]: 2026-01-22 10:06:02.258088226 +0000 UTC m=+0.097279212 container init d16988fc71afb2e953c95a9c74efceeeff3a6efa4be5263335e2bd01b994afae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_antonelli, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:06:02 np0005591760 podman[275695]: 2026-01-22 10:06:02.264751612 +0000 UTC m=+0.103942589 container start d16988fc71afb2e953c95a9c74efceeeff3a6efa4be5263335e2bd01b994afae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 05:06:02 np0005591760 podman[275695]: 2026-01-22 10:06:02.266003673 +0000 UTC m=+0.105194670 container attach d16988fc71afb2e953c95a9c74efceeeff3a6efa4be5263335e2bd01b994afae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 05:06:02 np0005591760 podman[275695]: 2026-01-22 10:06:02.178232162 +0000 UTC m=+0.017423158 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:06:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:02.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:02 np0005591760 zen_antonelli[275708]: {}
Jan 22 05:06:02 np0005591760 lvm[275786]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:06:02 np0005591760 lvm[275786]: VG ceph_vg0 finished
Jan 22 05:06:02 np0005591760 systemd[1]: libpod-d16988fc71afb2e953c95a9c74efceeeff3a6efa4be5263335e2bd01b994afae.scope: Deactivated successfully.
Jan 22 05:06:02 np0005591760 podman[275695]: 2026-01-22 10:06:02.774804365 +0000 UTC m=+0.613995351 container died d16988fc71afb2e953c95a9c74efceeeff3a6efa4be5263335e2bd01b994afae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 22 05:06:02 np0005591760 systemd[1]: var-lib-containers-storage-overlay-42b1ab2fc62c3a45a19a279b371f7eaffc56fa36d03023b9fb1b4fb9efd0d186-merged.mount: Deactivated successfully.
Jan 22 05:06:02 np0005591760 podman[275695]: 2026-01-22 10:06:02.802545329 +0000 UTC m=+0.641736306 container remove d16988fc71afb2e953c95a9c74efceeeff3a6efa4be5263335e2bd01b994afae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_antonelli, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:06:02 np0005591760 systemd[1]: libpod-conmon-d16988fc71afb2e953c95a9c74efceeeff3a6efa4be5263335e2bd01b994afae.scope: Deactivated successfully.
Jan 22 05:06:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:06:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:06:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:06:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:06:03 np0005591760 nova_compute[248045]: 2026-01-22 10:06:03.026 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:03.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:03.575Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:06:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:03.593Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:03.593Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:03.594Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1023: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:06:03 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:06:03 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:06:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:04.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:05.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:05 np0005591760 nova_compute[248045]: 2026-01-22 10:06:05.561 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1024: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 22 05:06:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:06.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:06:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:06:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:06:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:07 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:06:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:07.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:07.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:07.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:07.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:07.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:06:07] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:06:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:06:07] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:06:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1025: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:06:08 np0005591760 nova_compute[248045]: 2026-01-22 10:06:08.028 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:06:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:08.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:08.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:08.938Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:08.939Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:08.939Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:09.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1026: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 22 05:06:10 np0005591760 nova_compute[248045]: 2026-01-22 10:06:10.562 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:10.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:11.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1027: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:06:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:06:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:06:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:06:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:12 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:06:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:12.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:13 np0005591760 nova_compute[248045]: 2026-01-22 10:06:13.030 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:13.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:13.576Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:06:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:13.589Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:13.590Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:13.590Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1028: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:14.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:15 np0005591760 nova_compute[248045]: 2026-01-22 10:06:15.564 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1029: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:06:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:16.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:16 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:06:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:16 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:06:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:16 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:06:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:17 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:06:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:17.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:17.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:17.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:17.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:17.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:06:17] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:06:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:06:17] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:06:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1030: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:18 np0005591760 nova_compute[248045]: 2026-01-22 10:06:18.032 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:06:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:18.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:18.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:18.944Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:18.944Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:18.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:19.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:19 np0005591760 nova_compute[248045]: 2026-01-22 10:06:19.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:06:19 np0005591760 nova_compute[248045]: 2026-01-22 10:06:19.299 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:06:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:06:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:06:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:06:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:06:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:06:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:06:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1031: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:20 np0005591760 nova_compute[248045]: 2026-01-22 10:06:20.566 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:20.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:21.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:21 np0005591760 nova_compute[248045]: 2026-01-22 10:06:21.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:06:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1032: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:06:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:21 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:06:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:21 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:06:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:21 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:06:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:22 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.295 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.295 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.318 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.350 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.351 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.351 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.351 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.351 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:06:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:22.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:06:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1866168495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.694 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.911 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.912 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4496MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.912 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.913 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.967 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.968 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:06:22 np0005591760 nova_compute[248045]: 2026-01-22 10:06:22.991 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:06:23 np0005591760 nova_compute[248045]: 2026-01-22 10:06:23.034 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:23.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:23 np0005591760 podman[275888]: 2026-01-22 10:06:23.053444047 +0000 UTC m=+0.044734242 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 05:06:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:06:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3387182051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:06:23 np0005591760 nova_compute[248045]: 2026-01-22 10:06:23.333 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 05:06:23 np0005591760 nova_compute[248045]: 2026-01-22 10:06:23.338 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 05:06:23 np0005591760 nova_compute[248045]: 2026-01-22 10:06:23.350 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 05:06:23 np0005591760 nova_compute[248045]: 2026-01-22 10:06:23.351 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 05:06:23 np0005591760 nova_compute[248045]: 2026-01-22 10:06:23.352 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 05:06:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:23.576Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:06:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:23.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:23.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:23.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1033: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:24.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:25.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:25 np0005591760 podman[275928]: 2026-01-22 10:06:25.077769727 +0000 UTC m=+0.070714447 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 22 05:06:25 np0005591760 nova_compute[248045]: 2026-01-22 10:06:25.333 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 05:06:25 np0005591760 nova_compute[248045]: 2026-01-22 10:06:25.569 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:06:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:06:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:26.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:26 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:06:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:26 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:06:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:26 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:06:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:06:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:27.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:27.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:27.102Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:27.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:27.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:27 np0005591760 nova_compute[248045]: 2026-01-22 10:06:27.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 05:06:27 np0005591760 nova_compute[248045]: 2026-01-22 10:06:27.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 05:06:27 np0005591760 nova_compute[248045]: 2026-01-22 10:06:27.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 05:06:27 np0005591760 nova_compute[248045]: 2026-01-22 10:06:27.314 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 05:06:27 np0005591760 nova_compute[248045]: 2026-01-22 10:06:27.314 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 05:06:27 np0005591760 nova_compute[248045]: 2026-01-22 10:06:27.315 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 05:06:27 np0005591760 nova_compute[248045]: 2026-01-22 10:06:27.315 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 05:06:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:06:27] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:06:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:06:27] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:06:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1035: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:28 np0005591760 nova_compute[248045]: 2026-01-22 10:06:28.035 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:06:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:06:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:28.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:28.932Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:28.940Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:28.941Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:28.941Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:29.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1036: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:30 np0005591760 nova_compute[248045]: 2026-01-22 10:06:30.570 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:06:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:30.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:31.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1037: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:06:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:06:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:06:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:06:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:32 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:06:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:32.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:33 np0005591760 nova_compute[248045]: 2026-01-22 10:06:33.037 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:06:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:33.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:33.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:06:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:33.591Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:33.592Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:33.592Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1038: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:06:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:34.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:06:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:35.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:35 np0005591760 nova_compute[248045]: 2026-01-22 10:06:35.572 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:06:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1039: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:06:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:36.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:36 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:06:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:36 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:06:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:37 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:06:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:37 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:06:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:37.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:37.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:37.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:37.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:37.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:06:37] "GET /metrics HTTP/1.1" 200 48596 "" "Prometheus/2.51.0"
Jan 22 05:06:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:06:37] "GET /metrics HTTP/1.1" 200 48596 "" "Prometheus/2.51.0"
Jan 22 05:06:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1040: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:38 np0005591760 nova_compute[248045]: 2026-01-22 10:06:38.039 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:06:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:38.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:38.932Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:38.942Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:38.942Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:38.943Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:39.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1041: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=cleanup t=2026-01-22T10:06:39.966066407Z level=info msg="Completed cleanup jobs" duration=3.968764ms
Jan 22 05:06:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=plugins.update.checker t=2026-01-22T10:06:40.072733533Z level=info msg="Update check succeeded" duration=50.843081ms
Jan 22 05:06:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=grafana.update.checker t=2026-01-22T10:06:40.074396489Z level=info msg="Update check succeeded" duration=53.957103ms
Jan 22 05:06:40 np0005591760 nova_compute[248045]: 2026-01-22 10:06:40.573 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:06:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:40.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:06:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1042: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:06:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:41 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:06:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:41 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:06:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:41 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:06:42 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:06:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:42.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:43 np0005591760 nova_compute[248045]: 2026-01-22 10:06:43.041 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:43.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:43.578Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:06:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:43.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:43.598Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:43.602Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1043: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:44.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:45.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:45 np0005591760 nova_compute[248045]: 2026-01-22 10:06:45.576 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1044: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:06:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:46.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:46 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:06:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:46 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:06:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:46 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:06:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:06:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:47.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:47.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:47.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:47.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:47.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:06:47.324 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:06:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:06:47.324 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:06:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:06:47.324 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:06:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:06:47] "GET /metrics HTTP/1.1" 200 48596 "" "Prometheus/2.51.0"
Jan 22 05:06:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:06:47] "GET /metrics HTTP/1.1" 200 48596 "" "Prometheus/2.51.0"
Jan 22 05:06:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1045: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:48 np0005591760 nova_compute[248045]: 2026-01-22 10:06:48.044 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:06:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:48.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:48.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:48.941Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:48.941Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:48.942Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:06:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:06:49
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.nfs']
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:06:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1046: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:50 np0005591760 nova_compute[248045]: 2026-01-22 10:06:50.578 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:50.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:51.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1047: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:06:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:06:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:06:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:06:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:52 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:06:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:06:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:52.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:06:53 np0005591760 nova_compute[248045]: 2026-01-22 10:06:53.046 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:53.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:53.580Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:53.592Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:53.592Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:06:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:53.594Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1048: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:54 np0005591760 podman[276006]: 2026-01-22 10:06:54.050292221 +0000 UTC m=+0.037259836 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 05:06:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:54.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:55.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:55 np0005591760 nova_compute[248045]: 2026-01-22 10:06:55.579 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1049: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:06:56 np0005591760 podman[276049]: 2026-01-22 10:06:56.071492308 +0000 UTC m=+0.059543319 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 22 05:06:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:56.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:06:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:06:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:06:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:06:57 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:06:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:57.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:57.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:57.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:57.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:57.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:06:57] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:06:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:06:57] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:06:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1050: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:58 np0005591760 nova_compute[248045]: 2026-01-22 10:06:58.047 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:06:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:06:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:06:58.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:58.934Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:58.941Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:58.941Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:06:58.941Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:06:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:06:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:06:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:06:59.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1051: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:06:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:07:00 np0005591760 nova_compute[248045]: 2026-01-22 10:07:00.581 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:00.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:01.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:07:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:07:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:07:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:07:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:02 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:07:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:02.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:03 np0005591760 nova_compute[248045]: 2026-01-22 10:07:03.048 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:07:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:03.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:07:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:03.581Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:07:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:07:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:07:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:07:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:07:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:07:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:03.829Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:03.829Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:03.829Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:03 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:07:03 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:07:03 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:07:03 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:07:04 np0005591760 podman[276238]: 2026-01-22 10:07:04.003113935 +0000 UTC m=+0.028742666 container create 2c2a505701b098518bd2497dac74df260cdd6070a5d4bc9f5ddd9371e21e2923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 05:07:04 np0005591760 systemd[1]: Started libpod-conmon-2c2a505701b098518bd2497dac74df260cdd6070a5d4bc9f5ddd9371e21e2923.scope.
Jan 22 05:07:04 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:07:04 np0005591760 podman[276238]: 2026-01-22 10:07:04.054236945 +0000 UTC m=+0.079865695 container init 2c2a505701b098518bd2497dac74df260cdd6070a5d4bc9f5ddd9371e21e2923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_lamport, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 05:07:04 np0005591760 podman[276238]: 2026-01-22 10:07:04.058613218 +0000 UTC m=+0.084241947 container start 2c2a505701b098518bd2497dac74df260cdd6070a5d4bc9f5ddd9371e21e2923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_lamport, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 05:07:04 np0005591760 podman[276238]: 2026-01-22 10:07:04.059936373 +0000 UTC m=+0.085565103 container attach 2c2a505701b098518bd2497dac74df260cdd6070a5d4bc9f5ddd9371e21e2923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_lamport, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 05:07:04 np0005591760 hardcore_lamport[276251]: 167 167
Jan 22 05:07:04 np0005591760 systemd[1]: libpod-2c2a505701b098518bd2497dac74df260cdd6070a5d4bc9f5ddd9371e21e2923.scope: Deactivated successfully.
Jan 22 05:07:04 np0005591760 conmon[276251]: conmon 2c2a505701b098518bd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2c2a505701b098518bd2497dac74df260cdd6070a5d4bc9f5ddd9371e21e2923.scope/container/memory.events
Jan 22 05:07:04 np0005591760 podman[276238]: 2026-01-22 10:07:04.062992125 +0000 UTC m=+0.088620856 container died 2c2a505701b098518bd2497dac74df260cdd6070a5d4bc9f5ddd9371e21e2923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 05:07:04 np0005591760 systemd[1]: var-lib-containers-storage-overlay-009df618ae4ee361098ff401f4a06041c87c4d7c850c66be14b9c2f7e2908fb2-merged.mount: Deactivated successfully.
Jan 22 05:07:04 np0005591760 podman[276238]: 2026-01-22 10:07:04.081115554 +0000 UTC m=+0.106744284 container remove 2c2a505701b098518bd2497dac74df260cdd6070a5d4bc9f5ddd9371e21e2923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_lamport, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:07:04 np0005591760 podman[276238]: 2026-01-22 10:07:03.991551281 +0000 UTC m=+0.017180031 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:07:04 np0005591760 systemd[1]: libpod-conmon-2c2a505701b098518bd2497dac74df260cdd6070a5d4bc9f5ddd9371e21e2923.scope: Deactivated successfully.
Jan 22 05:07:04 np0005591760 podman[276272]: 2026-01-22 10:07:04.200978685 +0000 UTC m=+0.027356781 container create 5c5fab1596708ecde6b12b25890e382aa7f3b318edc381a46ff4b985567fe6d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 05:07:04 np0005591760 systemd[1]: Started libpod-conmon-5c5fab1596708ecde6b12b25890e382aa7f3b318edc381a46ff4b985567fe6d1.scope.
Jan 22 05:07:04 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:07:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be383c632a12679ddaa7c690a4d1bdfb34afcb94836d0d7cce1273f0a5814002/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be383c632a12679ddaa7c690a4d1bdfb34afcb94836d0d7cce1273f0a5814002/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be383c632a12679ddaa7c690a4d1bdfb34afcb94836d0d7cce1273f0a5814002/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be383c632a12679ddaa7c690a4d1bdfb34afcb94836d0d7cce1273f0a5814002/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:04 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be383c632a12679ddaa7c690a4d1bdfb34afcb94836d0d7cce1273f0a5814002/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:04 np0005591760 podman[276272]: 2026-01-22 10:07:04.260589099 +0000 UTC m=+0.086967186 container init 5c5fab1596708ecde6b12b25890e382aa7f3b318edc381a46ff4b985567fe6d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:07:04 np0005591760 podman[276272]: 2026-01-22 10:07:04.265555756 +0000 UTC m=+0.091933841 container start 5c5fab1596708ecde6b12b25890e382aa7f3b318edc381a46ff4b985567fe6d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_thompson, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 05:07:04 np0005591760 podman[276272]: 2026-01-22 10:07:04.266851049 +0000 UTC m=+0.093229135 container attach 5c5fab1596708ecde6b12b25890e382aa7f3b318edc381a46ff4b985567fe6d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_thompson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 05:07:04 np0005591760 podman[276272]: 2026-01-22 10:07:04.189876488 +0000 UTC m=+0.016254594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:07:04 np0005591760 hopeful_thompson[276285]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:07:04 np0005591760 hopeful_thompson[276285]: --> All data devices are unavailable
Jan 22 05:07:04 np0005591760 systemd[1]: libpod-5c5fab1596708ecde6b12b25890e382aa7f3b318edc381a46ff4b985567fe6d1.scope: Deactivated successfully.
Jan 22 05:07:04 np0005591760 conmon[276285]: conmon 5c5fab1596708ecde6b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c5fab1596708ecde6b12b25890e382aa7f3b318edc381a46ff4b985567fe6d1.scope/container/memory.events
Jan 22 05:07:04 np0005591760 podman[276272]: 2026-01-22 10:07:04.535525462 +0000 UTC m=+0.361903548 container died 5c5fab1596708ecde6b12b25890e382aa7f3b318edc381a46ff4b985567fe6d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_thompson, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 05:07:04 np0005591760 systemd[1]: var-lib-containers-storage-overlay-be383c632a12679ddaa7c690a4d1bdfb34afcb94836d0d7cce1273f0a5814002-merged.mount: Deactivated successfully.
Jan 22 05:07:04 np0005591760 podman[276272]: 2026-01-22 10:07:04.555932517 +0000 UTC m=+0.382310602 container remove 5c5fab1596708ecde6b12b25890e382aa7f3b318edc381a46ff4b985567fe6d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_thompson, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 05:07:04 np0005591760 systemd[1]: libpod-conmon-5c5fab1596708ecde6b12b25890e382aa7f3b318edc381a46ff4b985567fe6d1.scope: Deactivated successfully.
Jan 22 05:07:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:07:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:04.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:07:04 np0005591760 podman[276392]: 2026-01-22 10:07:04.962215617 +0000 UTC m=+0.026247228 container create cfe042aa3b46f9b59b7d3d91624451e03ff92c3237350f956cfdfd04a45f93a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:07:04 np0005591760 systemd[1]: Started libpod-conmon-cfe042aa3b46f9b59b7d3d91624451e03ff92c3237350f956cfdfd04a45f93a7.scope.
Jan 22 05:07:05 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:07:05 np0005591760 podman[276392]: 2026-01-22 10:07:05.028175085 +0000 UTC m=+0.092206717 container init cfe042aa3b46f9b59b7d3d91624451e03ff92c3237350f956cfdfd04a45f93a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:07:05 np0005591760 podman[276392]: 2026-01-22 10:07:05.034182584 +0000 UTC m=+0.098214196 container start cfe042aa3b46f9b59b7d3d91624451e03ff92c3237350f956cfdfd04a45f93a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:07:05 np0005591760 podman[276392]: 2026-01-22 10:07:05.035251911 +0000 UTC m=+0.099283522 container attach cfe042aa3b46f9b59b7d3d91624451e03ff92c3237350f956cfdfd04a45f93a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:07:05 np0005591760 silly_moser[276405]: 167 167
Jan 22 05:07:05 np0005591760 systemd[1]: libpod-cfe042aa3b46f9b59b7d3d91624451e03ff92c3237350f956cfdfd04a45f93a7.scope: Deactivated successfully.
Jan 22 05:07:05 np0005591760 podman[276392]: 2026-01-22 10:07:05.038409235 +0000 UTC m=+0.102440857 container died cfe042aa3b46f9b59b7d3d91624451e03ff92c3237350f956cfdfd04a45f93a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:07:05 np0005591760 podman[276392]: 2026-01-22 10:07:04.952209517 +0000 UTC m=+0.016241149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:07:05 np0005591760 systemd[1]: var-lib-containers-storage-overlay-20f5e2993a0481d9508db196a81dd92009aecdee52870b0615a694a5812aa961-merged.mount: Deactivated successfully.
Jan 22 05:07:05 np0005591760 podman[276392]: 2026-01-22 10:07:05.055113256 +0000 UTC m=+0.119144869 container remove cfe042aa3b46f9b59b7d3d91624451e03ff92c3237350f956cfdfd04a45f93a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moser, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 22 05:07:05 np0005591760 systemd[1]: libpod-conmon-cfe042aa3b46f9b59b7d3d91624451e03ff92c3237350f956cfdfd04a45f93a7.scope: Deactivated successfully.
Jan 22 05:07:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:05.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:05 np0005591760 podman[276427]: 2026-01-22 10:07:05.172733086 +0000 UTC m=+0.028751701 container create 0c36936e6e0a2c51f2e5bb2f99b03aac7702ebc86cac86dccb01caddd3b887a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:07:05 np0005591760 systemd[1]: Started libpod-conmon-0c36936e6e0a2c51f2e5bb2f99b03aac7702ebc86cac86dccb01caddd3b887a3.scope.
Jan 22 05:07:05 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:07:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce73aa4bff5397cb20946d269b8e324784ba4ce59ba28da19561962631e9eadb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce73aa4bff5397cb20946d269b8e324784ba4ce59ba28da19561962631e9eadb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce73aa4bff5397cb20946d269b8e324784ba4ce59ba28da19561962631e9eadb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:05 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce73aa4bff5397cb20946d269b8e324784ba4ce59ba28da19561962631e9eadb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:05 np0005591760 podman[276427]: 2026-01-22 10:07:05.229320491 +0000 UTC m=+0.085339105 container init 0c36936e6e0a2c51f2e5bb2f99b03aac7702ebc86cac86dccb01caddd3b887a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 05:07:05 np0005591760 podman[276427]: 2026-01-22 10:07:05.236322546 +0000 UTC m=+0.092341161 container start 0c36936e6e0a2c51f2e5bb2f99b03aac7702ebc86cac86dccb01caddd3b887a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:07:05 np0005591760 podman[276427]: 2026-01-22 10:07:05.237525614 +0000 UTC m=+0.093544229 container attach 0c36936e6e0a2c51f2e5bb2f99b03aac7702ebc86cac86dccb01caddd3b887a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_allen, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:07:05 np0005591760 podman[276427]: 2026-01-22 10:07:05.161502768 +0000 UTC m=+0.017521403 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]: {
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:    "0": [
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:        {
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            "devices": [
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "/dev/loop3"
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            ],
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            "lv_name": "ceph_lv0",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            "lv_size": "21470642176",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            "name": "ceph_lv0",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            "tags": {
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.cluster_name": "ceph",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.crush_device_class": "",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.encrypted": "0",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.osd_id": "0",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.type": "block",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.vdo": "0",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:                "ceph.with_tpm": "0"
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            },
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            "type": "block",
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:            "vg_name": "ceph_vg0"
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:        }
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]:    ]
Jan 22 05:07:05 np0005591760 dreamy_allen[276441]: }
Jan 22 05:07:05 np0005591760 systemd[1]: libpod-0c36936e6e0a2c51f2e5bb2f99b03aac7702ebc86cac86dccb01caddd3b887a3.scope: Deactivated successfully.
Jan 22 05:07:05 np0005591760 podman[276450]: 2026-01-22 10:07:05.493949426 +0000 UTC m=+0.017298003 container died 0c36936e6e0a2c51f2e5bb2f99b03aac7702ebc86cac86dccb01caddd3b887a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_allen, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:07:05 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ce73aa4bff5397cb20946d269b8e324784ba4ce59ba28da19561962631e9eadb-merged.mount: Deactivated successfully.
Jan 22 05:07:05 np0005591760 podman[276450]: 2026-01-22 10:07:05.515681549 +0000 UTC m=+0.039030116 container remove 0c36936e6e0a2c51f2e5bb2f99b03aac7702ebc86cac86dccb01caddd3b887a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_allen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:07:05 np0005591760 systemd[1]: libpod-conmon-0c36936e6e0a2c51f2e5bb2f99b03aac7702ebc86cac86dccb01caddd3b887a3.scope: Deactivated successfully.
Jan 22 05:07:05 np0005591760 nova_compute[248045]: 2026-01-22 10:07:05.582 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:07:05 np0005591760 podman[276542]: 2026-01-22 10:07:05.946355296 +0000 UTC m=+0.027695779 container create 2b80b74b4b2e97fe4d35028bbf4349626022b2b96f352e5ff3d0607cce90466c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 22 05:07:05 np0005591760 systemd[1]: Started libpod-conmon-2b80b74b4b2e97fe4d35028bbf4349626022b2b96f352e5ff3d0607cce90466c.scope.
Jan 22 05:07:05 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:07:05 np0005591760 podman[276542]: 2026-01-22 10:07:05.988831919 +0000 UTC m=+0.070172423 container init 2b80b74b4b2e97fe4d35028bbf4349626022b2b96f352e5ff3d0607cce90466c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:07:05 np0005591760 podman[276542]: 2026-01-22 10:07:05.992764876 +0000 UTC m=+0.074105360 container start 2b80b74b4b2e97fe4d35028bbf4349626022b2b96f352e5ff3d0607cce90466c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 05:07:05 np0005591760 podman[276542]: 2026-01-22 10:07:05.993855122 +0000 UTC m=+0.075195606 container attach 2b80b74b4b2e97fe4d35028bbf4349626022b2b96f352e5ff3d0607cce90466c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 22 05:07:05 np0005591760 naughty_meninsky[276556]: 167 167
Jan 22 05:07:05 np0005591760 systemd[1]: libpod-2b80b74b4b2e97fe4d35028bbf4349626022b2b96f352e5ff3d0607cce90466c.scope: Deactivated successfully.
Jan 22 05:07:05 np0005591760 conmon[276556]: conmon 2b80b74b4b2e97fe4d35 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b80b74b4b2e97fe4d35028bbf4349626022b2b96f352e5ff3d0607cce90466c.scope/container/memory.events
Jan 22 05:07:05 np0005591760 podman[276542]: 2026-01-22 10:07:05.996257432 +0000 UTC m=+0.077597917 container died 2b80b74b4b2e97fe4d35028bbf4349626022b2b96f352e5ff3d0607cce90466c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_meninsky, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 05:07:06 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4b66d5ac19bcb7106d0fe56b2f085d0ff9f8388714156657028c29f5f665252c-merged.mount: Deactivated successfully.
Jan 22 05:07:06 np0005591760 podman[276542]: 2026-01-22 10:07:06.013318627 +0000 UTC m=+0.094659111 container remove 2b80b74b4b2e97fe4d35028bbf4349626022b2b96f352e5ff3d0607cce90466c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_meninsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:07:06 np0005591760 podman[276542]: 2026-01-22 10:07:05.936035203 +0000 UTC m=+0.017375707 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:07:06 np0005591760 systemd[1]: libpod-conmon-2b80b74b4b2e97fe4d35028bbf4349626022b2b96f352e5ff3d0607cce90466c.scope: Deactivated successfully.
Jan 22 05:07:06 np0005591760 podman[276579]: 2026-01-22 10:07:06.130433135 +0000 UTC m=+0.027474383 container create fa37ccf910eb294724b5d76098af4b06212288412bb36797c8728ccf02f779fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_goldstine, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:07:06 np0005591760 systemd[1]: Started libpod-conmon-fa37ccf910eb294724b5d76098af4b06212288412bb36797c8728ccf02f779fc.scope.
Jan 22 05:07:06 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:07:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e03b65955dbe7edc9c30d14929c221ace1d20e9b5d25947ac5c314bee29438/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e03b65955dbe7edc9c30d14929c221ace1d20e9b5d25947ac5c314bee29438/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e03b65955dbe7edc9c30d14929c221ace1d20e9b5d25947ac5c314bee29438/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:06 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e03b65955dbe7edc9c30d14929c221ace1d20e9b5d25947ac5c314bee29438/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:07:06 np0005591760 podman[276579]: 2026-01-22 10:07:06.186887397 +0000 UTC m=+0.083928676 container init fa37ccf910eb294724b5d76098af4b06212288412bb36797c8728ccf02f779fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_goldstine, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 05:07:06 np0005591760 podman[276579]: 2026-01-22 10:07:06.19134845 +0000 UTC m=+0.088389698 container start fa37ccf910eb294724b5d76098af4b06212288412bb36797c8728ccf02f779fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Jan 22 05:07:06 np0005591760 podman[276579]: 2026-01-22 10:07:06.192582077 +0000 UTC m=+0.089623324 container attach fa37ccf910eb294724b5d76098af4b06212288412bb36797c8728ccf02f779fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_goldstine, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 05:07:06 np0005591760 podman[276579]: 2026-01-22 10:07:06.119689484 +0000 UTC m=+0.016730762 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:07:06 np0005591760 serene_goldstine[276592]: {}
Jan 22 05:07:06 np0005591760 lvm[276669]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:07:06 np0005591760 lvm[276669]: VG ceph_vg0 finished
Jan 22 05:07:06 np0005591760 systemd[1]: libpod-fa37ccf910eb294724b5d76098af4b06212288412bb36797c8728ccf02f779fc.scope: Deactivated successfully.
Jan 22 05:07:06 np0005591760 podman[276579]: 2026-01-22 10:07:06.687697831 +0000 UTC m=+0.584739108 container died fa37ccf910eb294724b5d76098af4b06212288412bb36797c8728ccf02f779fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 05:07:06 np0005591760 systemd[1]: var-lib-containers-storage-overlay-40e03b65955dbe7edc9c30d14929c221ace1d20e9b5d25947ac5c314bee29438-merged.mount: Deactivated successfully.
Jan 22 05:07:06 np0005591760 podman[276579]: 2026-01-22 10:07:06.710141466 +0000 UTC m=+0.607182713 container remove fa37ccf910eb294724b5d76098af4b06212288412bb36797c8728ccf02f779fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 22 05:07:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:06.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:06 np0005591760 systemd[1]: libpod-conmon-fa37ccf910eb294724b5d76098af4b06212288412bb36797c8728ccf02f779fc.scope: Deactivated successfully.
Jan 22 05:07:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:07:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:07:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:07:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:07:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:07:06 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:07:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:07:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:07 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:07:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:07 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:07:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:07 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:07:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:07:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:07.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:07:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:07.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:07.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:07.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:07.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:07:07] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:07:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:07:07] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:07:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:07:08 np0005591760 nova_compute[248045]: 2026-01-22 10:07:08.050 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:07:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:08.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:08.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:08.944Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:08.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:08.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:09.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:07:10 np0005591760 nova_compute[248045]: 2026-01-22 10:07:10.585 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:10.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:11.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:07:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:07:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:07:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:07:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:12 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:07:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:12.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:13 np0005591760 nova_compute[248045]: 2026-01-22 10:07:13.052 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:13.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:13.581Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:13.590Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:13.590Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:13.590Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:07:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:07:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:14.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:15.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:15 np0005591760 nova_compute[248045]: 2026-01-22 10:07:15.585 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:07:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:07:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:16.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:07:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:16 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:07:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:16 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:07:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:16 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:07:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:17 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:07:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:17.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:17.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:17.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:17.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:07:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:17.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:07:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:07:17] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:07:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:07:17] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:07:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:07:18 np0005591760 nova_compute[248045]: 2026-01-22 10:07:18.054 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:07:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:18.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:18.936Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:18.944Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:18.944Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:18.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:19.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:19 np0005591760 nova_compute[248045]: 2026-01-22 10:07:19.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:07:19 np0005591760 nova_compute[248045]: 2026-01-22 10:07:19.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:07:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:07:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:07:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:07:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:07:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:07:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:07:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:07:20 np0005591760 nova_compute[248045]: 2026-01-22 10:07:20.589 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:20.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:21.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:21 np0005591760 nova_compute[248045]: 2026-01-22 10:07:21.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:07:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:07:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:21 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:07:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:21 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:07:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:21 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:07:22 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:21 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.316 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.316 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.316 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.316 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.316 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:07:22 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:07:22 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3014752428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.658 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:07:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:07:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:22.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.856 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.856 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4530MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, 
"label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.857 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.857 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.899 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.899 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:07:22 np0005591760 nova_compute[248045]: 2026-01-22 10:07:22.978 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:07:23 np0005591760 nova_compute[248045]: 2026-01-22 10:07:23.056 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:23.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:07:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3156958577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:07:23 np0005591760 nova_compute[248045]: 2026-01-22 10:07:23.323 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:07:23 np0005591760 nova_compute[248045]: 2026-01-22 10:07:23.327 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:07:23 np0005591760 nova_compute[248045]: 2026-01-22 10:07:23.354 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:07:23 np0005591760 nova_compute[248045]: 2026-01-22 10:07:23.355 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:07:23 np0005591760 nova_compute[248045]: 2026-01-22 10:07:23.356 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:07:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:23.582Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:23.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:23.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:23.598Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:07:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:07:24 np0005591760 nova_compute[248045]: 2026-01-22 10:07:24.352 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:07:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:07:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:24.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:07:25 np0005591760 podman[276793]: 2026-01-22 10:07:25.045317581 +0000 UTC m=+0.036931497 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 05:07:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:25.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:25 np0005591760 nova_compute[248045]: 2026-01-22 10:07:25.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:07:25 np0005591760 nova_compute[248045]: 2026-01-22 10:07:25.590 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:07:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:07:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:07:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:07:26 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:07:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:26.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:27 np0005591760 podman[276812]: 2026-01-22 10:07:27.060694495 +0000 UTC m=+0.051220404 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 05:07:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:27.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:07:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:27.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:07:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:27.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:27.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:27.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:27 np0005591760 nova_compute[248045]: 2026-01-22 10:07:27.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:07:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:07:27] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:07:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:07:27] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:07:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:07:28 np0005591760 nova_compute[248045]: 2026-01-22 10:07:28.058 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:28 np0005591760 nova_compute[248045]: 2026-01-22 10:07:28.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:07:28 np0005591760 nova_compute[248045]: 2026-01-22 10:07:28.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:07:28 np0005591760 nova_compute[248045]: 2026-01-22 10:07:28.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:07:28 np0005591760 nova_compute[248045]: 2026-01-22 10:07:28.312 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:07:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:07:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:28.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:28.937Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:28.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:28.950Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:28.950Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:29.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:29 np0005591760 nova_compute[248045]: 2026-01-22 10:07:29.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:07:29 np0005591760 nova_compute[248045]: 2026-01-22 10:07:29.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:07:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:07:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:07:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:07:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:07:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:30 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:07:30 np0005591760 nova_compute[248045]: 2026-01-22 10:07:30.591 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:30.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:31.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 22 05:07:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:32.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:33 np0005591760 nova_compute[248045]: 2026-01-22 10:07:33.060 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:33.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:33.583Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:07:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:33.612Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:33.612Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:07:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:33.616Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:34.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:07:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:07:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:07:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:07:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:35.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:35 np0005591760 nova_compute[248045]: 2026-01-22 10:07:35.592 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 22 05:07:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:36.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:37.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:37.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:37.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:37.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:37.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:07:37] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:07:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:07:37] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:07:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:07:38 np0005591760 nova_compute[248045]: 2026-01-22 10:07:38.062 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:07:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:38.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:38.937Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:38.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:38.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:38.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:39.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:07:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:07:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:07:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:07:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:07:40 np0005591760 nova_compute[248045]: 2026-01-22 10:07:40.596 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:40.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:41.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 22 05:07:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:42.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:43 np0005591760 nova_compute[248045]: 2026-01-22 10:07:43.064 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:43.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:43.583Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:43.596Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:43.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:43.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:07:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:07:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:44.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:07:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:07:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:07:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:45 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:07:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:45.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:45 np0005591760 nova_compute[248045]: 2026-01-22 10:07:45.597 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:07:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:46.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:47.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:47.126Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:47.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:47.137Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:47.137Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:07:47.325 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:07:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:07:47.325 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:07:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:07:47.326 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:07:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:07:47] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:07:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:07:47] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:07:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:07:48 np0005591760 nova_compute[248045]: 2026-01-22 10:07:48.065 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:07:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:07:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:48.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:07:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:48.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:48.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:48.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:48.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:49.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:07:49
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', '.nfs', '.rgw.root']
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:07:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:07:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:07:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:07:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:07:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:50 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:07:50 np0005591760 nova_compute[248045]: 2026-01-22 10:07:50.599 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:07:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:50.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:07:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:51.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:07:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:52.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:53 np0005591760 nova_compute[248045]: 2026-01-22 10:07:53.068 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:53.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:53.584Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:53.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:53.598Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:53.598Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:07:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:07:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:07:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:54.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:07:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:07:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:07:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:07:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:55 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:07:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:55.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:55 np0005591760 nova_compute[248045]: 2026-01-22 10:07:55.600 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:07:56 np0005591760 podman[276914]: 2026-01-22 10:07:56.053243372 +0000 UTC m=+0.039410983 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 05:07:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:07:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:56.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:07:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:57.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:57.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:57.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:57.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:07:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:57.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:07:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:07:57] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:07:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:07:57] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:07:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:07:58 np0005591760 podman[276933]: 2026-01-22 10:07:58.069835618 +0000 UTC m=+0.064910792 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 22 05:07:58 np0005591760 nova_compute[248045]: 2026-01-22 10:07:58.069 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:07:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:07:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:07:58.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:58.940Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:58.951Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:58.952Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:07:58.952Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:07:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:07:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:07:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:07:59.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:07:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:08:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:07:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:00 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:00 np0005591760 nova_compute[248045]: 2026-01-22 10:08:00.602 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:00.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:01.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:08:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:02.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:03 np0005591760 nova_compute[248045]: 2026-01-22 10:08:03.071 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:08:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:03.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:08:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:03.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:03.596Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:03.596Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:03.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:08:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:04.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:05 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:08:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:05.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:08:05 np0005591760 nova_compute[248045]: 2026-01-22 10:08:05.604 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:08:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:08:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:06.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:08:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:07.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:07.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:07.161Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:07.229Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:07.232Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 05:08:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:08:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:08:07] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:08:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:08:07] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:08:07 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:08:07 np0005591760 podman[277124]: 2026-01-22 10:08:07.885312323 +0000 UTC m=+0.024853621 container create 764e85a89a05f8bb8a632da2946131b800936fd1de9cbf40912bd17314be4388 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:08:07 np0005591760 systemd[1]: Started libpod-conmon-764e85a89a05f8bb8a632da2946131b800936fd1de9cbf40912bd17314be4388.scope.
Jan 22 05:08:07 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:08:07 np0005591760 podman[277124]: 2026-01-22 10:08:07.943315969 +0000 UTC m=+0.082857286 container init 764e85a89a05f8bb8a632da2946131b800936fd1de9cbf40912bd17314be4388 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_faraday, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 05:08:07 np0005591760 podman[277124]: 2026-01-22 10:08:07.947829339 +0000 UTC m=+0.087370637 container start 764e85a89a05f8bb8a632da2946131b800936fd1de9cbf40912bd17314be4388 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_faraday, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:08:07 np0005591760 podman[277124]: 2026-01-22 10:08:07.948930616 +0000 UTC m=+0.088471915 container attach 764e85a89a05f8bb8a632da2946131b800936fd1de9cbf40912bd17314be4388 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 22 05:08:07 np0005591760 agitated_faraday[277137]: 167 167
Jan 22 05:08:07 np0005591760 systemd[1]: libpod-764e85a89a05f8bb8a632da2946131b800936fd1de9cbf40912bd17314be4388.scope: Deactivated successfully.
Jan 22 05:08:07 np0005591760 conmon[277137]: conmon 764e85a89a05f8bb8a63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-764e85a89a05f8bb8a632da2946131b800936fd1de9cbf40912bd17314be4388.scope/container/memory.events
Jan 22 05:08:07 np0005591760 podman[277124]: 2026-01-22 10:08:07.875738428 +0000 UTC m=+0.015279747 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:08:07 np0005591760 podman[277142]: 2026-01-22 10:08:07.982054312 +0000 UTC m=+0.018223958 container died 764e85a89a05f8bb8a632da2946131b800936fd1de9cbf40912bd17314be4388 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 05:08:07 np0005591760 systemd[1]: var-lib-containers-storage-overlay-727cff7e999fa4cd827c8dab7a52acf95ca5e75878536cd5c2f51e98f4a36b71-merged.mount: Deactivated successfully.
Jan 22 05:08:08 np0005591760 podman[277142]: 2026-01-22 10:08:08.002375233 +0000 UTC m=+0.038544870 container remove 764e85a89a05f8bb8a632da2946131b800936fd1de9cbf40912bd17314be4388 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 05:08:08 np0005591760 systemd[1]: libpod-conmon-764e85a89a05f8bb8a632da2946131b800936fd1de9cbf40912bd17314be4388.scope: Deactivated successfully.
Jan 22 05:08:08 np0005591760 nova_compute[248045]: 2026-01-22 10:08:08.072 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:08 np0005591760 podman[277162]: 2026-01-22 10:08:08.129417003 +0000 UTC m=+0.031024419 container create 95cc7b49dde058798d04b53320958b428ffc664ba5165a571875eb914b195fa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_yalow, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:08:08 np0005591760 systemd[1]: Started libpod-conmon-95cc7b49dde058798d04b53320958b428ffc664ba5165a571875eb914b195fa5.scope.
Jan 22 05:08:08 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:08:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d502b99ac391d48af42499e08f0aee5be1a679f054692c94a16c3bcd63df155e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d502b99ac391d48af42499e08f0aee5be1a679f054692c94a16c3bcd63df155e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d502b99ac391d48af42499e08f0aee5be1a679f054692c94a16c3bcd63df155e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d502b99ac391d48af42499e08f0aee5be1a679f054692c94a16c3bcd63df155e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:08 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d502b99ac391d48af42499e08f0aee5be1a679f054692c94a16c3bcd63df155e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:08 np0005591760 podman[277162]: 2026-01-22 10:08:08.188034306 +0000 UTC m=+0.089641731 container init 95cc7b49dde058798d04b53320958b428ffc664ba5165a571875eb914b195fa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 05:08:08 np0005591760 podman[277162]: 2026-01-22 10:08:08.193157967 +0000 UTC m=+0.094765382 container start 95cc7b49dde058798d04b53320958b428ffc664ba5165a571875eb914b195fa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_yalow, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:08:08 np0005591760 podman[277162]: 2026-01-22 10:08:08.194256409 +0000 UTC m=+0.095863824 container attach 95cc7b49dde058798d04b53320958b428ffc664ba5165a571875eb914b195fa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_yalow, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 22 05:08:08 np0005591760 podman[277162]: 2026-01-22 10:08:08.117213128 +0000 UTC m=+0.018820563 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:08:08 np0005591760 crazy_yalow[277175]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:08:08 np0005591760 crazy_yalow[277175]: --> All data devices are unavailable
Jan 22 05:08:08 np0005591760 podman[277162]: 2026-01-22 10:08:08.458697961 +0000 UTC m=+0.360305396 container died 95cc7b49dde058798d04b53320958b428ffc664ba5165a571875eb914b195fa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 22 05:08:08 np0005591760 systemd[1]: libpod-95cc7b49dde058798d04b53320958b428ffc664ba5165a571875eb914b195fa5.scope: Deactivated successfully.
Jan 22 05:08:08 np0005591760 systemd[1]: var-lib-containers-storage-overlay-d502b99ac391d48af42499e08f0aee5be1a679f054692c94a16c3bcd63df155e-merged.mount: Deactivated successfully.
Jan 22 05:08:08 np0005591760 podman[277162]: 2026-01-22 10:08:08.486444646 +0000 UTC m=+0.388052060 container remove 95cc7b49dde058798d04b53320958b428ffc664ba5165a571875eb914b195fa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 05:08:08 np0005591760 systemd[1]: libpod-conmon-95cc7b49dde058798d04b53320958b428ffc664ba5165a571875eb914b195fa5.scope: Deactivated successfully.
Jan 22 05:08:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:08:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:08:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:08.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:08:08 np0005591760 podman[277283]: 2026-01-22 10:08:08.915542541 +0000 UTC m=+0.031320426 container create 189e74efe22b533768584df709296a0e92579ce48e6035ac5371b9704b0fea5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_newton, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:08:08 np0005591760 systemd[1]: Started libpod-conmon-189e74efe22b533768584df709296a0e92579ce48e6035ac5371b9704b0fea5b.scope.
Jan 22 05:08:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:08.940Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:08.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:08.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:08.950Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:08 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:08:08 np0005591760 podman[277283]: 2026-01-22 10:08:08.97345835 +0000 UTC m=+0.089236255 container init 189e74efe22b533768584df709296a0e92579ce48e6035ac5371b9704b0fea5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_newton, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:08:08 np0005591760 podman[277283]: 2026-01-22 10:08:08.978064206 +0000 UTC m=+0.093842092 container start 189e74efe22b533768584df709296a0e92579ce48e6035ac5371b9704b0fea5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_newton, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 05:08:08 np0005591760 podman[277283]: 2026-01-22 10:08:08.979200779 +0000 UTC m=+0.094978664 container attach 189e74efe22b533768584df709296a0e92579ce48e6035ac5371b9704b0fea5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_newton, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:08:08 np0005591760 sad_newton[277296]: 167 167
Jan 22 05:08:08 np0005591760 systemd[1]: libpod-189e74efe22b533768584df709296a0e92579ce48e6035ac5371b9704b0fea5b.scope: Deactivated successfully.
Jan 22 05:08:08 np0005591760 podman[277283]: 2026-01-22 10:08:08.98168807 +0000 UTC m=+0.097465954 container died 189e74efe22b533768584df709296a0e92579ce48e6035ac5371b9704b0fea5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_newton, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 22 05:08:08 np0005591760 systemd[1]: var-lib-containers-storage-overlay-fda9dbf0bf5bc0730e6a69139250dffada832a4731be1624ceeacef635f76c22-merged.mount: Deactivated successfully.
Jan 22 05:08:08 np0005591760 podman[277283]: 2026-01-22 10:08:08.903640746 +0000 UTC m=+0.019418650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:08:09 np0005591760 podman[277283]: 2026-01-22 10:08:09.001182604 +0000 UTC m=+0.116960489 container remove 189e74efe22b533768584df709296a0e92579ce48e6035ac5371b9704b0fea5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_newton, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 05:08:09 np0005591760 systemd[1]: libpod-conmon-189e74efe22b533768584df709296a0e92579ce48e6035ac5371b9704b0fea5b.scope: Deactivated successfully.
Jan 22 05:08:09 np0005591760 podman[277317]: 2026-01-22 10:08:09.121427244 +0000 UTC m=+0.029581988 container create e58fca5f20a4e777f324cd528928ff60b66276b96f78bfe98bfeba91e151dbdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_wiles, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:08:09 np0005591760 systemd[1]: Started libpod-conmon-e58fca5f20a4e777f324cd528928ff60b66276b96f78bfe98bfeba91e151dbdd.scope.
Jan 22 05:08:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:09.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:09 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:08:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b32cf0602a59432acee8115c2ea7e088ae4f5099ad2569e3d6354473178f4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b32cf0602a59432acee8115c2ea7e088ae4f5099ad2569e3d6354473178f4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b32cf0602a59432acee8115c2ea7e088ae4f5099ad2569e3d6354473178f4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:09 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b32cf0602a59432acee8115c2ea7e088ae4f5099ad2569e3d6354473178f4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:09 np0005591760 podman[277317]: 2026-01-22 10:08:09.184660631 +0000 UTC m=+0.092815374 container init e58fca5f20a4e777f324cd528928ff60b66276b96f78bfe98bfeba91e151dbdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 05:08:09 np0005591760 podman[277317]: 2026-01-22 10:08:09.190385497 +0000 UTC m=+0.098540240 container start e58fca5f20a4e777f324cd528928ff60b66276b96f78bfe98bfeba91e151dbdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 22 05:08:09 np0005591760 podman[277317]: 2026-01-22 10:08:09.191720855 +0000 UTC m=+0.099875598 container attach e58fca5f20a4e777f324cd528928ff60b66276b96f78bfe98bfeba91e151dbdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 22 05:08:09 np0005591760 podman[277317]: 2026-01-22 10:08:09.10948898 +0000 UTC m=+0.017643723 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]: {
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:    "0": [
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:        {
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            "devices": [
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "/dev/loop3"
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            ],
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            "lv_name": "ceph_lv0",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            "lv_size": "21470642176",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            "name": "ceph_lv0",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            "tags": {
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.cluster_name": "ceph",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.crush_device_class": "",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.encrypted": "0",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.osd_id": "0",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.type": "block",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.vdo": "0",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:                "ceph.with_tpm": "0"
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            },
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            "type": "block",
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:            "vg_name": "ceph_vg0"
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:        }
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]:    ]
Jan 22 05:08:09 np0005591760 suspicious_wiles[277330]: }
Jan 22 05:08:09 np0005591760 systemd[1]: libpod-e58fca5f20a4e777f324cd528928ff60b66276b96f78bfe98bfeba91e151dbdd.scope: Deactivated successfully.
Jan 22 05:08:09 np0005591760 podman[277317]: 2026-01-22 10:08:09.440981428 +0000 UTC m=+0.349136172 container died e58fca5f20a4e777f324cd528928ff60b66276b96f78bfe98bfeba91e151dbdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_wiles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:08:09 np0005591760 systemd[1]: var-lib-containers-storage-overlay-46b32cf0602a59432acee8115c2ea7e088ae4f5099ad2569e3d6354473178f4d-merged.mount: Deactivated successfully.
Jan 22 05:08:09 np0005591760 podman[277317]: 2026-01-22 10:08:09.463463415 +0000 UTC m=+0.371618158 container remove e58fca5f20a4e777f324cd528928ff60b66276b96f78bfe98bfeba91e151dbdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_wiles, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:08:09 np0005591760 systemd[1]: libpod-conmon-e58fca5f20a4e777f324cd528928ff60b66276b96f78bfe98bfeba91e151dbdd.scope: Deactivated successfully.
Jan 22 05:08:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:08:09 np0005591760 podman[277431]: 2026-01-22 10:08:09.878604631 +0000 UTC m=+0.030877070 container create e6dcf4702a9b3b6ec9cbaedc4f07a623a0676f9c473b66ee67f2af8aea7fbf20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_khayyam, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:08:09 np0005591760 systemd[1]: Started libpod-conmon-e6dcf4702a9b3b6ec9cbaedc4f07a623a0676f9c473b66ee67f2af8aea7fbf20.scope.
Jan 22 05:08:09 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:08:09 np0005591760 podman[277431]: 2026-01-22 10:08:09.934672465 +0000 UTC m=+0.086944904 container init e6dcf4702a9b3b6ec9cbaedc4f07a623a0676f9c473b66ee67f2af8aea7fbf20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 05:08:09 np0005591760 podman[277431]: 2026-01-22 10:08:09.939546637 +0000 UTC m=+0.091819066 container start e6dcf4702a9b3b6ec9cbaedc4f07a623a0676f9c473b66ee67f2af8aea7fbf20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_khayyam, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:08:09 np0005591760 podman[277431]: 2026-01-22 10:08:09.940808506 +0000 UTC m=+0.093080926 container attach e6dcf4702a9b3b6ec9cbaedc4f07a623a0676f9c473b66ee67f2af8aea7fbf20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_khayyam, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 05:08:09 np0005591760 ecstatic_khayyam[277444]: 167 167
Jan 22 05:08:09 np0005591760 systemd[1]: libpod-e6dcf4702a9b3b6ec9cbaedc4f07a623a0676f9c473b66ee67f2af8aea7fbf20.scope: Deactivated successfully.
Jan 22 05:08:09 np0005591760 conmon[277444]: conmon e6dcf4702a9b3b6ec9cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6dcf4702a9b3b6ec9cbaedc4f07a623a0676f9c473b66ee67f2af8aea7fbf20.scope/container/memory.events
Jan 22 05:08:09 np0005591760 podman[277431]: 2026-01-22 10:08:09.943730437 +0000 UTC m=+0.096002865 container died e6dcf4702a9b3b6ec9cbaedc4f07a623a0676f9c473b66ee67f2af8aea7fbf20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_khayyam, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:08:09 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e1610a5bf467f86ebec8578fec50e9994df8b6e1e0aa40dd33d3c36ae6d22b78-merged.mount: Deactivated successfully.
Jan 22 05:08:09 np0005591760 podman[277431]: 2026-01-22 10:08:09.963168714 +0000 UTC m=+0.115441143 container remove e6dcf4702a9b3b6ec9cbaedc4f07a623a0676f9c473b66ee67f2af8aea7fbf20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 05:08:09 np0005591760 podman[277431]: 2026-01-22 10:08:09.866357695 +0000 UTC m=+0.018630124 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:08:09 np0005591760 systemd[1]: libpod-conmon-e6dcf4702a9b3b6ec9cbaedc4f07a623a0676f9c473b66ee67f2af8aea7fbf20.scope: Deactivated successfully.
Jan 22 05:08:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:10 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:10 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:10 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:10 np0005591760 podman[277467]: 2026-01-22 10:08:10.086153984 +0000 UTC m=+0.028706146 container create 7c7fba74c80c23b19f0fa0cfd406c93d4a33a8429481bb79e5ea77190b774812 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 22 05:08:10 np0005591760 systemd[1]: Started libpod-conmon-7c7fba74c80c23b19f0fa0cfd406c93d4a33a8429481bb79e5ea77190b774812.scope.
Jan 22 05:08:10 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:08:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29150b453b80cbf6b76017c6c836338e0dd6b057f68605bca263e57b7c2e63d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29150b453b80cbf6b76017c6c836338e0dd6b057f68605bca263e57b7c2e63d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29150b453b80cbf6b76017c6c836338e0dd6b057f68605bca263e57b7c2e63d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:10 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29150b453b80cbf6b76017c6c836338e0dd6b057f68605bca263e57b7c2e63d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:08:10 np0005591760 podman[277467]: 2026-01-22 10:08:10.145122357 +0000 UTC m=+0.087674530 container init 7c7fba74c80c23b19f0fa0cfd406c93d4a33a8429481bb79e5ea77190b774812 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:08:10 np0005591760 podman[277467]: 2026-01-22 10:08:10.150301956 +0000 UTC m=+0.092854118 container start 7c7fba74c80c23b19f0fa0cfd406c93d4a33a8429481bb79e5ea77190b774812 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 05:08:10 np0005591760 podman[277467]: 2026-01-22 10:08:10.152402946 +0000 UTC m=+0.094955110 container attach 7c7fba74c80c23b19f0fa0cfd406c93d4a33a8429481bb79e5ea77190b774812 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:08:10 np0005591760 podman[277467]: 2026-01-22 10:08:10.075058248 +0000 UTC m=+0.017610432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:08:10 np0005591760 nova_compute[248045]: 2026-01-22 10:08:10.605 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:10 np0005591760 lvm[277556]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:08:10 np0005591760 lvm[277556]: VG ceph_vg0 finished
Jan 22 05:08:10 np0005591760 hopeful_jackson[277480]: {}
Jan 22 05:08:10 np0005591760 lvm[277558]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:08:10 np0005591760 lvm[277558]: VG ceph_vg0 finished
Jan 22 05:08:10 np0005591760 systemd[1]: libpod-7c7fba74c80c23b19f0fa0cfd406c93d4a33a8429481bb79e5ea77190b774812.scope: Deactivated successfully.
Jan 22 05:08:10 np0005591760 podman[277467]: 2026-01-22 10:08:10.670904742 +0000 UTC m=+0.613456905 container died 7c7fba74c80c23b19f0fa0cfd406c93d4a33a8429481bb79e5ea77190b774812 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 05:08:10 np0005591760 systemd[1]: var-lib-containers-storage-overlay-29150b453b80cbf6b76017c6c836338e0dd6b057f68605bca263e57b7c2e63d5-merged.mount: Deactivated successfully.
Jan 22 05:08:10 np0005591760 podman[277467]: 2026-01-22 10:08:10.694531599 +0000 UTC m=+0.637083762 container remove 7c7fba74c80c23b19f0fa0cfd406c93d4a33a8429481bb79e5ea77190b774812 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Jan 22 05:08:10 np0005591760 systemd[1]: libpod-conmon-7c7fba74c80c23b19f0fa0cfd406c93d4a33a8429481bb79e5ea77190b774812.scope: Deactivated successfully.
Jan 22 05:08:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:08:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:08:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:08:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:08:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:10.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:10 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:08:10 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:08:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:11.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:08:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:08:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:12.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:08:13 np0005591760 nova_compute[248045]: 2026-01-22 10:08:13.074 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:13.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:08:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:13.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:13.601Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:13.601Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:13.601Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:08:14 np0005591760 nova_compute[248045]: 2026-01-22 10:08:14.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:14.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:15 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:15 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.002000021s ======
Jan 22 05:08:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:15.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Jan 22 05:08:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:08:15 np0005591760 nova_compute[248045]: 2026-01-22 10:08:15.605 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:16.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:17.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:17.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:17.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:17.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:08:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:17.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:08:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:08:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:08:17] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:08:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:08:17] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:08:18 np0005591760 nova_compute[248045]: 2026-01-22 10:08:18.075 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:08:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:18.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:18.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:18.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:18.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:18.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:08:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:19.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:08:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:08:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:08:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:08:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:08:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:08:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:08:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:20 np0005591760 nova_compute[248045]: 2026-01-22 10:08:20.311 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:20 np0005591760 nova_compute[248045]: 2026-01-22 10:08:20.312 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 05:08:20 np0005591760 nova_compute[248045]: 2026-01-22 10:08:20.607 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:20.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:08:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:21.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:08:21 np0005591760 nova_compute[248045]: 2026-01-22 10:08:21.313 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:21 np0005591760 nova_compute[248045]: 2026-01-22 10:08:21.313 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:21 np0005591760 nova_compute[248045]: 2026-01-22 10:08:21.313 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:08:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:08:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:08:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:22.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:08:23 np0005591760 nova_compute[248045]: 2026-01-22 10:08:23.078 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:08:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:23.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:08:23 np0005591760 nova_compute[248045]: 2026-01-22 10:08:23.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:23 np0005591760 nova_compute[248045]: 2026-01-22 10:08:23.322 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:08:23 np0005591760 nova_compute[248045]: 2026-01-22 10:08:23.322 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:08:23 np0005591760 nova_compute[248045]: 2026-01-22 10:08:23.322 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:08:23 np0005591760 nova_compute[248045]: 2026-01-22 10:08:23.322 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:08:23 np0005591760 nova_compute[248045]: 2026-01-22 10:08:23.323 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:08:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:23.587Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:23.604Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:23.605Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:23.605Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:08:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:08:23 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4178981254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:08:23 np0005591760 nova_compute[248045]: 2026-01-22 10:08:23.676 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:08:23 np0005591760 nova_compute[248045]: 2026-01-22 10:08:23.866 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:08:23 np0005591760 nova_compute[248045]: 2026-01-22 10:08:23.866 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4498MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:08:23 np0005591760 nova_compute[248045]: 2026-01-22 10:08:23.867 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:08:23 np0005591760 nova_compute[248045]: 2026-01-22 10:08:23.867 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.001 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.001 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.069 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing inventories for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.083 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating ProviderTree inventory for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.083 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating inventory in ProviderTree for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.096 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing aggregate associations for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.120 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing trait associations for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f, traits: HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,HW_CPU_X86_AVX512VAES,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,HW_CPU_X86_SSE41,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.140 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:08:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:08:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3800131224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.473 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.477 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.492 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.494 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:08:24 np0005591760 nova_compute[248045]: 2026-01-22 10:08:24.494 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:08:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:08:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:24.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:08:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:25.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:25 np0005591760 nova_compute[248045]: 2026-01-22 10:08:25.489 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:25 np0005591760 nova_compute[248045]: 2026-01-22 10:08:25.490 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:08:25 np0005591760 nova_compute[248045]: 2026-01-22 10:08:25.506 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:25 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 05:08:25 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 05:08:25 np0005591760 nova_compute[248045]: 2026-01-22 10:08:25.610 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:26 np0005591760 nova_compute[248045]: 2026-01-22 10:08:26.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:26 np0005591760 nova_compute[248045]: 2026-01-22 10:08:26.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 05:08:26 np0005591760 nova_compute[248045]: 2026-01-22 10:08:26.316 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 05:08:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:08:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:26.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:08:27 np0005591760 podman[277681]: 2026-01-22 10:08:27.063916197 +0000 UTC m=+0.047942322 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 05:08:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:27.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:27.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:27.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:27.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:08:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:27.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:08:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:08:27] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:08:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:08:27] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:08:28 np0005591760 nova_compute[248045]: 2026-01-22 10:08:28.078 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:28 np0005591760 nova_compute[248045]: 2026-01-22 10:08:28.315 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:28 np0005591760 nova_compute[248045]: 2026-01-22 10:08:28.316 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:08:28 np0005591760 nova_compute[248045]: 2026-01-22 10:08:28.316 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:08:28 np0005591760 nova_compute[248045]: 2026-01-22 10:08:28.327 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:08:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:08:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:28.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:28.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:28.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:28.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:28.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:29 np0005591760 podman[277698]: 2026-01-22 10:08:29.093400738 +0000 UTC m=+0.082839121 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 05:08:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:29.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:29 np0005591760 nova_compute[248045]: 2026-01-22 10:08:29.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0[106097]: logger=infra.usagestats t=2026-01-22T10:08:29.989458339Z level=info msg="Usage stats are ready to report"
Jan 22 05:08:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:30 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:30 np0005591760 nova_compute[248045]: 2026-01-22 10:08:30.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:30 np0005591760 nova_compute[248045]: 2026-01-22 10:08:30.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:30 np0005591760 nova_compute[248045]: 2026-01-22 10:08:30.614 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:30.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:08:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:31.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:08:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:08:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:32.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:33 np0005591760 nova_compute[248045]: 2026-01-22 10:08:33.080 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:33.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:33.587Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:33.598Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:33.602Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:33.602Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:08:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:34.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:35.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:08:35 np0005591760 nova_compute[248045]: 2026-01-22 10:08:35.617 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:36.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:37.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:37.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:37.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:37.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:37.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:08:37] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:08:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:08:37] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:08:38 np0005591760 nova_compute[248045]: 2026-01-22 10:08:38.084 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:08:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:08:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:38.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:08:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:38.942Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:38.950Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:38.951Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:38.951Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:40 np0005591760 nova_compute[248045]: 2026-01-22 10:08:40.620 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:08:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:40.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:08:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:41.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:08:42 np0005591760 nova_compute[248045]: 2026-01-22 10:08:42.279 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:08:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:42.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:43 np0005591760 nova_compute[248045]: 2026-01-22 10:08:43.085 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:43.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:43.588Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:43.598Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:43.606Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:43.607Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:08:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:44.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:45 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:45.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:08:45 np0005591760 nova_compute[248045]: 2026-01-22 10:08:45.620 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:08:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:46.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:08:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:47.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:47.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:47.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:47.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:08:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:47.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:08:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:08:47.327 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:08:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:08:47.327 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:08:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:08:47.327 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:08:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:08:47] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:08:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:08:47] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:08:48 np0005591760 nova_compute[248045]: 2026-01-22 10:08:48.087 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:08:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:48.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:48.943Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:48.950Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:48.950Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:48.951Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:49.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:08:49
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['vms', 'default.rgw.meta', '.nfs', 'volumes', '.mgr', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data']
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:08:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:08:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:50 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:50 np0005591760 nova_compute[248045]: 2026-01-22 10:08:50.623 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:50.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:51.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:08:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:52.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:53 np0005591760 nova_compute[248045]: 2026-01-22 10:08:53.089 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:53.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:53.588Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:53.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:53.600Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:53.601Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:08:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:54.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:55.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:08:55 np0005591760 nova_compute[248045]: 2026-01-22 10:08:55.623 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:56.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:57.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:57.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:57.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:57.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:57.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:08:57] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:08:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:08:57] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:08:58 np0005591760 podman[277801]: 2026-01-22 10:08:58.046317604 +0000 UTC m=+0.039614978 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 22 05:08:58 np0005591760 nova_compute[248045]: 2026-01-22 10:08:58.091 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:08:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:08:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:08:58.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:58.944Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:58.953Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:58.953Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:08:58.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:08:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:58 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:08:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:58 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:08:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:58 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:08:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:08:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:08:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:08:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:08:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:08:59.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:08:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:09:00 np0005591760 podman[277820]: 2026-01-22 10:09:00.065066837 +0000 UTC m=+0.059830875 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 05:09:00 np0005591760 nova_compute[248045]: 2026-01-22 10:09:00.625 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:09:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:00.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:09:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:01.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:09:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:02.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:03 np0005591760 nova_compute[248045]: 2026-01-22 10:09:03.093 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:03.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:03.589Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:03.596Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:03.596Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:03.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:09:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:03 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:09:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:03 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:09:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:03 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:09:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:09:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:04.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:05.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:09:05 np0005591760 nova_compute[248045]: 2026-01-22 10:09:05.626 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:06.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:07.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:07.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:07.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:07.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:07.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:09:07] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:09:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:09:07] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:09:08 np0005591760 nova_compute[248045]: 2026-01-22 10:09:08.094 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:09:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:08.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:08.945Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:08.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:08.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:08.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:09:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:09:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:09:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:09:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:09:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:09.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:09:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:10 np0005591760 nova_compute[248045]: 2026-01-22 10:09:10.627 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:10.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:11.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:09:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 22 05:09:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 22 05:09:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:12 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:12.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 22 05:09:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 22 05:09:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:13 np0005591760 nova_compute[248045]: 2026-01-22 10:09:13.095 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:13.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:09:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.443105) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076553443126, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2362, "num_deletes": 503, "total_data_size": 4204020, "memory_usage": 4272328, "flush_reason": "Manual Compaction"}
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076553453071, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3670111, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29925, "largest_seqno": 32286, "table_properties": {"data_size": 3660683, "index_size": 5346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 23766, "raw_average_key_size": 20, "raw_value_size": 3639361, "raw_average_value_size": 3071, "num_data_blocks": 231, "num_entries": 1185, "num_filter_entries": 1185, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769076351, "oldest_key_time": 1769076351, "file_creation_time": 1769076553, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 10002 microseconds, and 6384 cpu microseconds.
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.453105) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3670111 bytes OK
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.453120) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.453457) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.453467) EVENT_LOG_v1 {"time_micros": 1769076553453464, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.453479) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4193487, prev total WAL file size 4193487, number of live WAL files 2.
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.455240) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3584KB)], [68(14MB)]
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076553455263, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 18482463, "oldest_snapshot_seqno": -1}
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6429 keys, 12562072 bytes, temperature: kUnknown
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076553483826, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 12562072, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12522049, "index_size": 22825, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 168031, "raw_average_key_size": 26, "raw_value_size": 12408803, "raw_average_value_size": 1930, "num_data_blocks": 897, "num_entries": 6429, "num_filter_entries": 6429, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769076553, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.484055) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 12562072 bytes
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.484582) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 643.8 rd, 437.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 14.1 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(8.5) write-amplify(3.4) OK, records in: 7431, records dropped: 1002 output_compression: NoCompression
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.484595) EVENT_LOG_v1 {"time_micros": 1769076553484589, "job": 38, "event": "compaction_finished", "compaction_time_micros": 28707, "compaction_time_cpu_micros": 20223, "output_level": 6, "num_output_files": 1, "total_output_size": 12562072, "num_input_records": 7431, "num_output_records": 6429, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076553485379, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076553487376, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.454208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.487454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.487457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.487458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.487459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:09:13.487461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:09:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:13.589Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:13.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:13.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:13.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:09:13 np0005591760 podman[278019]: 2026-01-22 10:09:13.829217009 +0000 UTC m=+0.027395210 container create 04641e548a90abccfaeed262efe3810821ea760f6b8c002d2babb1a8757c1130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_johnson, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:09:13 np0005591760 systemd[1]: Started libpod-conmon-04641e548a90abccfaeed262efe3810821ea760f6b8c002d2babb1a8757c1130.scope.
Jan 22 05:09:13 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:09:13 np0005591760 podman[278019]: 2026-01-22 10:09:13.887048163 +0000 UTC m=+0.085226374 container init 04641e548a90abccfaeed262efe3810821ea760f6b8c002d2babb1a8757c1130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:13 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:09:13 np0005591760 podman[278019]: 2026-01-22 10:09:13.895532205 +0000 UTC m=+0.093710406 container start 04641e548a90abccfaeed262efe3810821ea760f6b8c002d2babb1a8757c1130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_johnson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:09:13 np0005591760 podman[278019]: 2026-01-22 10:09:13.897098389 +0000 UTC m=+0.095276591 container attach 04641e548a90abccfaeed262efe3810821ea760f6b8c002d2babb1a8757c1130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:09:13 np0005591760 friendly_johnson[278032]: 167 167
Jan 22 05:09:13 np0005591760 systemd[1]: libpod-04641e548a90abccfaeed262efe3810821ea760f6b8c002d2babb1a8757c1130.scope: Deactivated successfully.
Jan 22 05:09:13 np0005591760 conmon[278032]: conmon 04641e548a90abccfaee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-04641e548a90abccfaeed262efe3810821ea760f6b8c002d2babb1a8757c1130.scope/container/memory.events
Jan 22 05:09:13 np0005591760 podman[278019]: 2026-01-22 10:09:13.901962563 +0000 UTC m=+0.100140774 container died 04641e548a90abccfaeed262efe3810821ea760f6b8c002d2babb1a8757c1130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 05:09:13 np0005591760 systemd[1]: var-lib-containers-storage-overlay-750e0408036dfe32418e3bc30ef6e6e6efd195fc51cd2290aa9c0c90e8b8a34a-merged.mount: Deactivated successfully.
Jan 22 05:09:13 np0005591760 podman[278019]: 2026-01-22 10:09:13.817583187 +0000 UTC m=+0.015761409 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:09:13 np0005591760 podman[278019]: 2026-01-22 10:09:13.920692439 +0000 UTC m=+0.118870640 container remove 04641e548a90abccfaeed262efe3810821ea760f6b8c002d2babb1a8757c1130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_johnson, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 05:09:13 np0005591760 systemd[1]: libpod-conmon-04641e548a90abccfaeed262efe3810821ea760f6b8c002d2babb1a8757c1130.scope: Deactivated successfully.
Jan 22 05:09:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:09:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:09:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:09:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:09:14 np0005591760 podman[278056]: 2026-01-22 10:09:14.045501898 +0000 UTC m=+0.030985813 container create 5a91a6f7a3cfca2203cd7100dbe6233e8c46ec861bf261348eda9a632732b304 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Jan 22 05:09:14 np0005591760 systemd[1]: Started libpod-conmon-5a91a6f7a3cfca2203cd7100dbe6233e8c46ec861bf261348eda9a632732b304.scope.
Jan 22 05:09:14 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:09:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bb48cca24945257b7fd49086e2fcaa7a96e89b1317f4d7edf12de077a71fbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bb48cca24945257b7fd49086e2fcaa7a96e89b1317f4d7edf12de077a71fbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bb48cca24945257b7fd49086e2fcaa7a96e89b1317f4d7edf12de077a71fbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bb48cca24945257b7fd49086e2fcaa7a96e89b1317f4d7edf12de077a71fbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:14 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bb48cca24945257b7fd49086e2fcaa7a96e89b1317f4d7edf12de077a71fbc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:14 np0005591760 podman[278056]: 2026-01-22 10:09:14.105747696 +0000 UTC m=+0.091231610 container init 5a91a6f7a3cfca2203cd7100dbe6233e8c46ec861bf261348eda9a632732b304 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 05:09:14 np0005591760 podman[278056]: 2026-01-22 10:09:14.11033042 +0000 UTC m=+0.095814334 container start 5a91a6f7a3cfca2203cd7100dbe6233e8c46ec861bf261348eda9a632732b304 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 05:09:14 np0005591760 podman[278056]: 2026-01-22 10:09:14.111726102 +0000 UTC m=+0.097210016 container attach 5a91a6f7a3cfca2203cd7100dbe6233e8c46ec861bf261348eda9a632732b304 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:09:14 np0005591760 podman[278056]: 2026-01-22 10:09:14.033272392 +0000 UTC m=+0.018756306 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:09:14 np0005591760 reverent_cannon[278068]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:09:14 np0005591760 reverent_cannon[278068]: --> All data devices are unavailable
Jan 22 05:09:14 np0005591760 systemd[1]: libpod-5a91a6f7a3cfca2203cd7100dbe6233e8c46ec861bf261348eda9a632732b304.scope: Deactivated successfully.
Jan 22 05:09:14 np0005591760 podman[278056]: 2026-01-22 10:09:14.378124398 +0000 UTC m=+0.363608312 container died 5a91a6f7a3cfca2203cd7100dbe6233e8c46ec861bf261348eda9a632732b304 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cannon, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 05:09:14 np0005591760 systemd[1]: var-lib-containers-storage-overlay-10bb48cca24945257b7fd49086e2fcaa7a96e89b1317f4d7edf12de077a71fbc-merged.mount: Deactivated successfully.
Jan 22 05:09:14 np0005591760 podman[278056]: 2026-01-22 10:09:14.401164964 +0000 UTC m=+0.386648879 container remove 5a91a6f7a3cfca2203cd7100dbe6233e8c46ec861bf261348eda9a632732b304 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cannon, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:09:14 np0005591760 systemd[1]: libpod-conmon-5a91a6f7a3cfca2203cd7100dbe6233e8c46ec861bf261348eda9a632732b304.scope: Deactivated successfully.
Jan 22 05:09:14 np0005591760 podman[278174]: 2026-01-22 10:09:14.808082209 +0000 UTC m=+0.026706902 container create b90e84689ec0e2823eb4207c8bcca1536583f661589ca41876fc9d8f945112a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_hermann, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 22 05:09:14 np0005591760 systemd[1]: Started libpod-conmon-b90e84689ec0e2823eb4207c8bcca1536583f661589ca41876fc9d8f945112a9.scope.
Jan 22 05:09:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:09:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:14.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:09:14 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:09:14 np0005591760 podman[278174]: 2026-01-22 10:09:14.86568363 +0000 UTC m=+0.084308323 container init b90e84689ec0e2823eb4207c8bcca1536583f661589ca41876fc9d8f945112a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:09:14 np0005591760 podman[278174]: 2026-01-22 10:09:14.870972354 +0000 UTC m=+0.089597038 container start b90e84689ec0e2823eb4207c8bcca1536583f661589ca41876fc9d8f945112a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_hermann, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:09:14 np0005591760 podman[278174]: 2026-01-22 10:09:14.872572232 +0000 UTC m=+0.091196915 container attach b90e84689ec0e2823eb4207c8bcca1536583f661589ca41876fc9d8f945112a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_hermann, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 05:09:14 np0005591760 modest_hermann[278188]: 167 167
Jan 22 05:09:14 np0005591760 systemd[1]: libpod-b90e84689ec0e2823eb4207c8bcca1536583f661589ca41876fc9d8f945112a9.scope: Deactivated successfully.
Jan 22 05:09:14 np0005591760 podman[278174]: 2026-01-22 10:09:14.874578145 +0000 UTC m=+0.093202828 container died b90e84689ec0e2823eb4207c8bcca1536583f661589ca41876fc9d8f945112a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_hermann, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:09:14 np0005591760 systemd[1]: var-lib-containers-storage-overlay-2ab0273ff883607f91380fbd811a970b24d5b0ddae01d017b510d73422e811f3-merged.mount: Deactivated successfully.
Jan 22 05:09:14 np0005591760 podman[278174]: 2026-01-22 10:09:14.797460646 +0000 UTC m=+0.016085349 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:09:14 np0005591760 podman[278174]: 2026-01-22 10:09:14.89794686 +0000 UTC m=+0.116571543 container remove b90e84689ec0e2823eb4207c8bcca1536583f661589ca41876fc9d8f945112a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:09:14 np0005591760 systemd[1]: libpod-conmon-b90e84689ec0e2823eb4207c8bcca1536583f661589ca41876fc9d8f945112a9.scope: Deactivated successfully.
Jan 22 05:09:15 np0005591760 podman[278235]: 2026-01-22 10:09:15.023133171 +0000 UTC m=+0.030260736 container create acf128c113e1099c011f7cd899521582598f6197700b948d51bd2c38d2a47d36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:09:15 np0005591760 systemd[1]: Started libpod-conmon-acf128c113e1099c011f7cd899521582598f6197700b948d51bd2c38d2a47d36.scope.
Jan 22 05:09:15 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:09:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9932c47ce852eb3c64f79a954bf0ed7aedded1ab644b840a1f842cb11ad44537/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9932c47ce852eb3c64f79a954bf0ed7aedded1ab644b840a1f842cb11ad44537/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9932c47ce852eb3c64f79a954bf0ed7aedded1ab644b840a1f842cb11ad44537/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:15 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9932c47ce852eb3c64f79a954bf0ed7aedded1ab644b840a1f842cb11ad44537/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:15 np0005591760 podman[278235]: 2026-01-22 10:09:15.083103379 +0000 UTC m=+0.090230954 container init acf128c113e1099c011f7cd899521582598f6197700b948d51bd2c38d2a47d36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_davinci, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:09:15 np0005591760 podman[278235]: 2026-01-22 10:09:15.089592067 +0000 UTC m=+0.096719632 container start acf128c113e1099c011f7cd899521582598f6197700b948d51bd2c38d2a47d36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_davinci, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:09:15 np0005591760 podman[278235]: 2026-01-22 10:09:15.090879664 +0000 UTC m=+0.098007230 container attach acf128c113e1099c011f7cd899521582598f6197700b948d51bd2c38d2a47d36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:09:15 np0005591760 podman[278235]: 2026-01-22 10:09:15.011233296 +0000 UTC m=+0.018360871 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:09:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:15.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:15 np0005591760 loving_davinci[278248]: {
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:    "0": [
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:        {
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            "devices": [
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "/dev/loop3"
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            ],
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            "lv_name": "ceph_lv0",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            "lv_size": "21470642176",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            "name": "ceph_lv0",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            "tags": {
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.cluster_name": "ceph",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.crush_device_class": "",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.encrypted": "0",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.osd_id": "0",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.type": "block",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.vdo": "0",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:                "ceph.with_tpm": "0"
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            },
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            "type": "block",
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:            "vg_name": "ceph_vg0"
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:        }
Jan 22 05:09:15 np0005591760 loving_davinci[278248]:    ]
Jan 22 05:09:15 np0005591760 loving_davinci[278248]: }
Jan 22 05:09:15 np0005591760 systemd[1]: libpod-acf128c113e1099c011f7cd899521582598f6197700b948d51bd2c38d2a47d36.scope: Deactivated successfully.
Jan 22 05:09:15 np0005591760 podman[278235]: 2026-01-22 10:09:15.327493055 +0000 UTC m=+0.334620620 container died acf128c113e1099c011f7cd899521582598f6197700b948d51bd2c38d2a47d36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_davinci, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:09:15 np0005591760 systemd[1]: var-lib-containers-storage-overlay-9932c47ce852eb3c64f79a954bf0ed7aedded1ab644b840a1f842cb11ad44537-merged.mount: Deactivated successfully.
Jan 22 05:09:15 np0005591760 podman[278235]: 2026-01-22 10:09:15.349689229 +0000 UTC m=+0.356816794 container remove acf128c113e1099c011f7cd899521582598f6197700b948d51bd2c38d2a47d36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_davinci, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:09:15 np0005591760 systemd[1]: libpod-conmon-acf128c113e1099c011f7cd899521582598f6197700b948d51bd2c38d2a47d36.scope: Deactivated successfully.
Jan 22 05:09:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:09:15 np0005591760 nova_compute[248045]: 2026-01-22 10:09:15.628 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:15 np0005591760 podman[278349]: 2026-01-22 10:09:15.769224131 +0000 UTC m=+0.028530532 container create 16326913503dcc9f5a28f2e7d6318aa3d769add6be24b66698a221fdec9c8561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:09:15 np0005591760 systemd[1]: Started libpod-conmon-16326913503dcc9f5a28f2e7d6318aa3d769add6be24b66698a221fdec9c8561.scope.
Jan 22 05:09:15 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:09:15 np0005591760 podman[278349]: 2026-01-22 10:09:15.813728279 +0000 UTC m=+0.073034690 container init 16326913503dcc9f5a28f2e7d6318aa3d769add6be24b66698a221fdec9c8561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_banzai, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:09:15 np0005591760 podman[278349]: 2026-01-22 10:09:15.81787018 +0000 UTC m=+0.077176581 container start 16326913503dcc9f5a28f2e7d6318aa3d769add6be24b66698a221fdec9c8561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 05:09:15 np0005591760 podman[278349]: 2026-01-22 10:09:15.819181093 +0000 UTC m=+0.078487514 container attach 16326913503dcc9f5a28f2e7d6318aa3d769add6be24b66698a221fdec9c8561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_banzai, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:09:15 np0005591760 youthful_banzai[278363]: 167 167
Jan 22 05:09:15 np0005591760 systemd[1]: libpod-16326913503dcc9f5a28f2e7d6318aa3d769add6be24b66698a221fdec9c8561.scope: Deactivated successfully.
Jan 22 05:09:15 np0005591760 podman[278349]: 2026-01-22 10:09:15.821009672 +0000 UTC m=+0.080316094 container died 16326913503dcc9f5a28f2e7d6318aa3d769add6be24b66698a221fdec9c8561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:09:15 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4f25e44264916959ae95378b7bf9da04fda45e90896239623f53ebfcaf9cbc15-merged.mount: Deactivated successfully.
Jan 22 05:09:15 np0005591760 podman[278349]: 2026-01-22 10:09:15.841266338 +0000 UTC m=+0.100572728 container remove 16326913503dcc9f5a28f2e7d6318aa3d769add6be24b66698a221fdec9c8561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 22 05:09:15 np0005591760 podman[278349]: 2026-01-22 10:09:15.75846557 +0000 UTC m=+0.017771990 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:09:15 np0005591760 systemd[1]: libpod-conmon-16326913503dcc9f5a28f2e7d6318aa3d769add6be24b66698a221fdec9c8561.scope: Deactivated successfully.
Jan 22 05:09:15 np0005591760 podman[278385]: 2026-01-22 10:09:15.960810366 +0000 UTC m=+0.027862231 container create d0b34bed7ae555945ee41d86f65c5070381eb8ba434ab4b57195ff74d504693a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_edison, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:09:15 np0005591760 systemd[1]: Started libpod-conmon-d0b34bed7ae555945ee41d86f65c5070381eb8ba434ab4b57195ff74d504693a.scope.
Jan 22 05:09:16 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:09:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07df27f21025656f5be74febb78fd89415ebc49a57148ffe134c0fc9cf0945ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07df27f21025656f5be74febb78fd89415ebc49a57148ffe134c0fc9cf0945ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07df27f21025656f5be74febb78fd89415ebc49a57148ffe134c0fc9cf0945ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:16 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07df27f21025656f5be74febb78fd89415ebc49a57148ffe134c0fc9cf0945ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:09:16 np0005591760 podman[278385]: 2026-01-22 10:09:16.027706898 +0000 UTC m=+0.094758773 container init d0b34bed7ae555945ee41d86f65c5070381eb8ba434ab4b57195ff74d504693a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 22 05:09:16 np0005591760 podman[278385]: 2026-01-22 10:09:16.032688854 +0000 UTC m=+0.099740709 container start d0b34bed7ae555945ee41d86f65c5070381eb8ba434ab4b57195ff74d504693a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_edison, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 05:09:16 np0005591760 podman[278385]: 2026-01-22 10:09:16.033984408 +0000 UTC m=+0.101036252 container attach d0b34bed7ae555945ee41d86f65c5070381eb8ba434ab4b57195ff74d504693a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:09:16 np0005591760 podman[278385]: 2026-01-22 10:09:15.948774786 +0000 UTC m=+0.015826661 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:09:16 np0005591760 sleepy_edison[278399]: {}
Jan 22 05:09:16 np0005591760 lvm[278476]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:09:16 np0005591760 lvm[278476]: VG ceph_vg0 finished
Jan 22 05:09:16 np0005591760 systemd[1]: libpod-d0b34bed7ae555945ee41d86f65c5070381eb8ba434ab4b57195ff74d504693a.scope: Deactivated successfully.
Jan 22 05:09:16 np0005591760 podman[278385]: 2026-01-22 10:09:16.530201999 +0000 UTC m=+0.597253854 container died d0b34bed7ae555945ee41d86f65c5070381eb8ba434ab4b57195ff74d504693a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:09:16 np0005591760 systemd[1]: var-lib-containers-storage-overlay-07df27f21025656f5be74febb78fd89415ebc49a57148ffe134c0fc9cf0945ae-merged.mount: Deactivated successfully.
Jan 22 05:09:16 np0005591760 podman[278385]: 2026-01-22 10:09:16.553568469 +0000 UTC m=+0.620620325 container remove d0b34bed7ae555945ee41d86f65c5070381eb8ba434ab4b57195ff74d504693a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_edison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 05:09:16 np0005591760 systemd[1]: libpod-conmon-d0b34bed7ae555945ee41d86f65c5070381eb8ba434ab4b57195ff74d504693a.scope: Deactivated successfully.
Jan 22 05:09:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:09:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:16 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:09:16 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:16.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:16 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:16 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:09:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:17.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:17.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:17.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:17.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:17.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:09:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:09:17] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:09:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:09:17] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:09:18 np0005591760 nova_compute[248045]: 2026-01-22 10:09:18.098 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:09:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:18.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:18.946Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:18.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:18.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:18.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:18 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:09:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:18 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:09:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:18 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:09:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:09:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:09:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:19.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:09:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:09:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:09:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:09:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:09:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:09:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:09:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:09:20 np0005591760 nova_compute[248045]: 2026-01-22 10:09:20.630 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:20.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:09:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:21.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:09:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:09:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:22.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:23 np0005591760 nova_compute[248045]: 2026-01-22 10:09:23.100 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:23 np0005591760 nova_compute[248045]: 2026-01-22 10:09:23.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:09:23 np0005591760 nova_compute[248045]: 2026-01-22 10:09:23.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:09:23 np0005591760 nova_compute[248045]: 2026-01-22 10:09:23.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:09:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:09:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:23.590Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:23.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:23.600Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:23.600Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:09:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:09:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:09:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:09:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.295 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.314 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.314 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.314 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.314 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.315 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:09:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:09:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3204141911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.645 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.841 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.842 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4495MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, 
"label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.842 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.842 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:09:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:24.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.885 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.886 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:09:24 np0005591760 nova_compute[248045]: 2026-01-22 10:09:24.912 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:09:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:25.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:25 np0005591760 nova_compute[248045]: 2026-01-22 10:09:25.245 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:09:25 np0005591760 nova_compute[248045]: 2026-01-22 10:09:25.249 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:09:25 np0005591760 nova_compute[248045]: 2026-01-22 10:09:25.262 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:09:25 np0005591760 nova_compute[248045]: 2026-01-22 10:09:25.263 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:09:25 np0005591760 nova_compute[248045]: 2026-01-22 10:09:25.263 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:09:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:09:25 np0005591760 nova_compute[248045]: 2026-01-22 10:09:25.631 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:26 np0005591760 nova_compute[248045]: 2026-01-22 10:09:26.264 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:09:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:26.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:27.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:27.126Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:27.126Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:27.126Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:27.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:09:27] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:09:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:09:27] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:09:28 np0005591760 nova_compute[248045]: 2026-01-22 10:09:28.101 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:09:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:28.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:28.947Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:28.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:28.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:28.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:09:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:09:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:09:29 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:09:29 np0005591760 podman[278569]: 2026-01-22 10:09:29.043268706 +0000 UTC m=+0.034565805 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 05:09:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:29.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:29 np0005591760 nova_compute[248045]: 2026-01-22 10:09:29.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:09:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:30 np0005591760 nova_compute[248045]: 2026-01-22 10:09:30.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:09:30 np0005591760 nova_compute[248045]: 2026-01-22 10:09:30.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:09:30 np0005591760 nova_compute[248045]: 2026-01-22 10:09:30.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:09:30 np0005591760 nova_compute[248045]: 2026-01-22 10:09:30.312 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:09:30 np0005591760 nova_compute[248045]: 2026-01-22 10:09:30.633 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:30.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:31 np0005591760 podman[278588]: 2026-01-22 10:09:31.064272021 +0000 UTC m=+0.057832628 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 05:09:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:31.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:31 np0005591760 nova_compute[248045]: 2026-01-22 10:09:31.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:09:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:09:32 np0005591760 nova_compute[248045]: 2026-01-22 10:09:32.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:09:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:32.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:33 np0005591760 nova_compute[248045]: 2026-01-22 10:09:33.105 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:33.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:33.591Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:33.602Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:33.602Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:33.603Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:09:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:33 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:09:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:33 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:09:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:33 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:09:34 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:09:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:34.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:09:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:35.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:09:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:09:35 np0005591760 nova_compute[248045]: 2026-01-22 10:09:35.635 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:36.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:37.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:37.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:37.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:37.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:37.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:09:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:09:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:09:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:09:38 np0005591760 nova_compute[248045]: 2026-01-22 10:09:38.106 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:09:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:09:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:38.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:09:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:38.948Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:38.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:38.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:38.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:38 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:09:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:38 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:09:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:38 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:09:39 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:09:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:39.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:40 np0005591760 nova_compute[248045]: 2026-01-22 10:09:40.637 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:09:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:40.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:09:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:09:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:41.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:09:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:09:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:42.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:43 np0005591760 nova_compute[248045]: 2026-01-22 10:09:43.109 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:43.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:43.591Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:43.605Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:43.605Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:43.606Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:09:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:43 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:09:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:43 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:09:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:43 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:09:44 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:09:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:44.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:45.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:09:45 np0005591760 nova_compute[248045]: 2026-01-22 10:09:45.640 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:09:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:46.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:09:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:47.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:47.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:47.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:47.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:09:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:47.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:09:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:09:47.328 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:09:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:09:47.329 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:09:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:09:47.329 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:09:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:09:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:09:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:09:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:09:48 np0005591760 nova_compute[248045]: 2026-01-22 10:09:48.111 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:09:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:48.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:48.949Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:48.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:48.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:48.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:48 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:09:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:48 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:09:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:48 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:09:49 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:09:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:49.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:09:49
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'images', '.nfs', 'backups', 'default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.meta']
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:09:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:09:50 np0005591760 nova_compute[248045]: 2026-01-22 10:09:50.640 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:50.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:51.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:09:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:52.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:53 np0005591760 nova_compute[248045]: 2026-01-22 10:09:53.113 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:09:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:53.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:09:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:53.592Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:53.603Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:53.604Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:53.604Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:09:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:53 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:09:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:53 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:09:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:53 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:09:54 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:09:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:54.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:09:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:55.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:09:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:09:55 np0005591760 nova_compute[248045]: 2026-01-22 10:09:55.641 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:56.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:57.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:57.122Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:57.122Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:57.122Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:09:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:57.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:09:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:09:57] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:09:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:09:57] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:09:58 np0005591760 nova_compute[248045]: 2026-01-22 10:09:58.114 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:09:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:09:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:09:58.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:58.949Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:58.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:58.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:09:58.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:09:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:58 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:09:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:58 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:09:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:58 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:09:59 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:09:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:09:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:09:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:09:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:09:59.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:09:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:10:00 np0005591760 ceph-mon[74254]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 failed cephadm daemon(s)
Jan 22 05:10:00 np0005591760 podman[278689]: 2026-01-22 10:10:00.05742953 +0000 UTC m=+0.044976369 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 05:10:00 np0005591760 ceph-mon[74254]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Jan 22 05:10:00 np0005591760 nova_compute[248045]: 2026-01-22 10:10:00.642 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:00.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:10:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:01.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:10:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:10:02 np0005591760 podman[278707]: 2026-01-22 10:10:02.107593985 +0000 UTC m=+0.089782236 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 05:10:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:02.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:03 np0005591760 nova_compute[248045]: 2026-01-22 10:10:03.117 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:03.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:10:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:03.592Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:03.601Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:03.603Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:03.603Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:10:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:03 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:10:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:03 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:10:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:03 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:10:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:10:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:04.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:05.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:10:05 np0005591760 nova_compute[248045]: 2026-01-22 10:10:05.644 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:10:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:06.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:10:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:07.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:07.118Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:07.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:07.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:07.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:10:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:10:07] "GET /metrics HTTP/1.1" 200 48595 "" "Prometheus/2.51.0"
Jan 22 05:10:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:10:07] "GET /metrics HTTP/1.1" 200 48595 "" "Prometheus/2.51.0"
Jan 22 05:10:08 np0005591760 nova_compute[248045]: 2026-01-22 10:10:08.119 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:10:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:08.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:08.951Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:08.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:08.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:08.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:08 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:10:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:08 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:10:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:08 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:10:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:10:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:09.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:10:10 np0005591760 nova_compute[248045]: 2026-01-22 10:10:10.645 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:10:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:10.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:10:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:10:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:11.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:10:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:10:12 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:12 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:12 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:12.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:13 np0005591760 nova_compute[248045]: 2026-01-22 10:10:13.121 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:13.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:10:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:13.593Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:13.608Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:13.608Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:13.609Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:10:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:13 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:10:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:13 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:10:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:13 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:10:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:10:14 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:14 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:10:14 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:14.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:10:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:15.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:10:15 np0005591760 nova_compute[248045]: 2026-01-22 10:10:15.645 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:16 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:16 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:16 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:16.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:17.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:17.124Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:17.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:17.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:17.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:10:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:10:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:10:17 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:10:17 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:10:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:10:17] "GET /metrics HTTP/1.1" 200 48595 "" "Prometheus/2.51.0"
Jan 22 05:10:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:10:17] "GET /metrics HTTP/1.1" 200 48595 "" "Prometheus/2.51.0"
Jan 22 05:10:17 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:10:17 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:10:17 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:10:17 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:10:17 np0005591760 podman[278932]: 2026-01-22 10:10:17.747880927 +0000 UTC m=+0.027713310 container create 04d5c9a0b1645d19a478131dc9593c4e21c71f10137cae3b67b05eccfc6e2fb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_herschel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:10:17 np0005591760 systemd[1]: Started libpod-conmon-04d5c9a0b1645d19a478131dc9593c4e21c71f10137cae3b67b05eccfc6e2fb0.scope.
Jan 22 05:10:17 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:10:17 np0005591760 podman[278932]: 2026-01-22 10:10:17.803642537 +0000 UTC m=+0.083474940 container init 04d5c9a0b1645d19a478131dc9593c4e21c71f10137cae3b67b05eccfc6e2fb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_herschel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:10:17 np0005591760 podman[278932]: 2026-01-22 10:10:17.808201065 +0000 UTC m=+0.088033449 container start 04d5c9a0b1645d19a478131dc9593c4e21c71f10137cae3b67b05eccfc6e2fb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_herschel, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:10:17 np0005591760 podman[278932]: 2026-01-22 10:10:17.809430916 +0000 UTC m=+0.089263298 container attach 04d5c9a0b1645d19a478131dc9593c4e21c71f10137cae3b67b05eccfc6e2fb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:10:17 np0005591760 sad_herschel[278945]: 167 167
Jan 22 05:10:17 np0005591760 systemd[1]: libpod-04d5c9a0b1645d19a478131dc9593c4e21c71f10137cae3b67b05eccfc6e2fb0.scope: Deactivated successfully.
Jan 22 05:10:17 np0005591760 podman[278932]: 2026-01-22 10:10:17.812702545 +0000 UTC m=+0.092534928 container died 04d5c9a0b1645d19a478131dc9593c4e21c71f10137cae3b67b05eccfc6e2fb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 05:10:17 np0005591760 systemd[1]: var-lib-containers-storage-overlay-62cbe8b4d96a2339b6b48bbf4a46e9357beba2b366571f10f4167d1b72faa394-merged.mount: Deactivated successfully.
Jan 22 05:10:17 np0005591760 podman[278932]: 2026-01-22 10:10:17.737036564 +0000 UTC m=+0.016868957 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:10:17 np0005591760 podman[278932]: 2026-01-22 10:10:17.834457717 +0000 UTC m=+0.114290100 container remove 04d5c9a0b1645d19a478131dc9593c4e21c71f10137cae3b67b05eccfc6e2fb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:10:17 np0005591760 systemd[1]: libpod-conmon-04d5c9a0b1645d19a478131dc9593c4e21c71f10137cae3b67b05eccfc6e2fb0.scope: Deactivated successfully.
Jan 22 05:10:17 np0005591760 podman[278968]: 2026-01-22 10:10:17.959097747 +0000 UTC m=+0.030430666 container create 5d0ffaaa8251b8d23e2fd913460ed677bbe802f66aa7592dfb2ad384f123ebde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 22 05:10:17 np0005591760 systemd[1]: Started libpod-conmon-5d0ffaaa8251b8d23e2fd913460ed677bbe802f66aa7592dfb2ad384f123ebde.scope.
Jan 22 05:10:18 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:10:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d76a6d0721c899db8dfd0868e58dac6731534b216d45aed57ef823e5c3e1b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d76a6d0721c899db8dfd0868e58dac6731534b216d45aed57ef823e5c3e1b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d76a6d0721c899db8dfd0868e58dac6731534b216d45aed57ef823e5c3e1b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d76a6d0721c899db8dfd0868e58dac6731534b216d45aed57ef823e5c3e1b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d76a6d0721c899db8dfd0868e58dac6731534b216d45aed57ef823e5c3e1b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:18 np0005591760 podman[278968]: 2026-01-22 10:10:18.012969734 +0000 UTC m=+0.084302672 container init 5d0ffaaa8251b8d23e2fd913460ed677bbe802f66aa7592dfb2ad384f123ebde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_williamson, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 22 05:10:18 np0005591760 podman[278968]: 2026-01-22 10:10:18.019635737 +0000 UTC m=+0.090968655 container start 5d0ffaaa8251b8d23e2fd913460ed677bbe802f66aa7592dfb2ad384f123ebde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:10:18 np0005591760 podman[278968]: 2026-01-22 10:10:18.020735149 +0000 UTC m=+0.092068069 container attach 5d0ffaaa8251b8d23e2fd913460ed677bbe802f66aa7592dfb2ad384f123ebde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_williamson, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 05:10:18 np0005591760 podman[278968]: 2026-01-22 10:10:17.945514137 +0000 UTC m=+0.016847076 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:10:18 np0005591760 nova_compute[248045]: 2026-01-22 10:10:18.123 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:18 np0005591760 youthful_williamson[278982]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:10:18 np0005591760 youthful_williamson[278982]: --> All data devices are unavailable
Jan 22 05:10:18 np0005591760 systemd[1]: libpod-5d0ffaaa8251b8d23e2fd913460ed677bbe802f66aa7592dfb2ad384f123ebde.scope: Deactivated successfully.
Jan 22 05:10:18 np0005591760 podman[278968]: 2026-01-22 10:10:18.278422567 +0000 UTC m=+0.349755486 container died 5d0ffaaa8251b8d23e2fd913460ed677bbe802f66aa7592dfb2ad384f123ebde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 22 05:10:18 np0005591760 systemd[1]: var-lib-containers-storage-overlay-f3d76a6d0721c899db8dfd0868e58dac6731534b216d45aed57ef823e5c3e1b4-merged.mount: Deactivated successfully.
Jan 22 05:10:18 np0005591760 podman[278968]: 2026-01-22 10:10:18.299631969 +0000 UTC m=+0.370964888 container remove 5d0ffaaa8251b8d23e2fd913460ed677bbe802f66aa7592dfb2ad384f123ebde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_williamson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:10:18 np0005591760 systemd[1]: libpod-conmon-5d0ffaaa8251b8d23e2fd913460ed677bbe802f66aa7592dfb2ad384f123ebde.scope: Deactivated successfully.
Jan 22 05:10:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:10:18 np0005591760 podman[279088]: 2026-01-22 10:10:18.714765441 +0000 UTC m=+0.030505717 container create d8689cdcbfc74ed07a8c09dfdc17d2da0d81fdb76efc98029f8052e112927a0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_rubin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 05:10:18 np0005591760 systemd[1]: Started libpod-conmon-d8689cdcbfc74ed07a8c09dfdc17d2da0d81fdb76efc98029f8052e112927a0f.scope.
Jan 22 05:10:18 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:10:18 np0005591760 podman[279088]: 2026-01-22 10:10:18.768199631 +0000 UTC m=+0.083939928 container init d8689cdcbfc74ed07a8c09dfdc17d2da0d81fdb76efc98029f8052e112927a0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:10:18 np0005591760 podman[279088]: 2026-01-22 10:10:18.772950622 +0000 UTC m=+0.088690898 container start d8689cdcbfc74ed07a8c09dfdc17d2da0d81fdb76efc98029f8052e112927a0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 05:10:18 np0005591760 podman[279088]: 2026-01-22 10:10:18.774162467 +0000 UTC m=+0.089902744 container attach d8689cdcbfc74ed07a8c09dfdc17d2da0d81fdb76efc98029f8052e112927a0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_rubin, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:10:18 np0005591760 elegant_rubin[279100]: 167 167
Jan 22 05:10:18 np0005591760 systemd[1]: libpod-d8689cdcbfc74ed07a8c09dfdc17d2da0d81fdb76efc98029f8052e112927a0f.scope: Deactivated successfully.
Jan 22 05:10:18 np0005591760 podman[279088]: 2026-01-22 10:10:18.777012632 +0000 UTC m=+0.092752909 container died d8689cdcbfc74ed07a8c09dfdc17d2da0d81fdb76efc98029f8052e112927a0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_rubin, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:10:18 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ad83f51dbc294cca8f5454233e3b5426c963cba70b05d495311561f953220d37-merged.mount: Deactivated successfully.
Jan 22 05:10:18 np0005591760 podman[279088]: 2026-01-22 10:10:18.796498123 +0000 UTC m=+0.112238401 container remove d8689cdcbfc74ed07a8c09dfdc17d2da0d81fdb76efc98029f8052e112927a0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_rubin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:10:18 np0005591760 podman[279088]: 2026-01-22 10:10:18.702610485 +0000 UTC m=+0.018350772 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:10:18 np0005591760 systemd[1]: libpod-conmon-d8689cdcbfc74ed07a8c09dfdc17d2da0d81fdb76efc98029f8052e112927a0f.scope: Deactivated successfully.
Jan 22 05:10:18 np0005591760 podman[279124]: 2026-01-22 10:10:18.91708044 +0000 UTC m=+0.028713486 container create 686fa91f8a90c9ffb2c6d9e0292b63e126ea98c669483fd515390ce528c0bc34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_chandrasekhar, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:10:18 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:18 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:10:18 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:18.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:10:18 np0005591760 systemd[1]: Started libpod-conmon-686fa91f8a90c9ffb2c6d9e0292b63e126ea98c669483fd515390ce528c0bc34.scope.
Jan 22 05:10:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:18.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:18.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:18.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:18.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:18 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:10:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6a5b905b1049bd53f3cc539905d3beed41c9553995f04df0c872e45dbd7d3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6a5b905b1049bd53f3cc539905d3beed41c9553995f04df0c872e45dbd7d3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6a5b905b1049bd53f3cc539905d3beed41c9553995f04df0c872e45dbd7d3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:18 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6a5b905b1049bd53f3cc539905d3beed41c9553995f04df0c872e45dbd7d3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:18 np0005591760 podman[279124]: 2026-01-22 10:10:18.987996294 +0000 UTC m=+0.099629360 container init 686fa91f8a90c9ffb2c6d9e0292b63e126ea98c669483fd515390ce528c0bc34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_chandrasekhar, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 22 05:10:18 np0005591760 podman[279124]: 2026-01-22 10:10:18.992802698 +0000 UTC m=+0.104435744 container start 686fa91f8a90c9ffb2c6d9e0292b63e126ea98c669483fd515390ce528c0bc34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_chandrasekhar, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:10:18 np0005591760 podman[279124]: 2026-01-22 10:10:18.993893124 +0000 UTC m=+0.105526171 container attach 686fa91f8a90c9ffb2c6d9e0292b63e126ea98c669483fd515390ce528c0bc34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 05:10:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:18 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:10:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:18 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:10:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:18 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:10:19 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:10:19 np0005591760 podman[279124]: 2026-01-22 10:10:18.905928206 +0000 UTC m=+0.017561272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]: {
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:    "0": [
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:        {
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            "devices": [
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "/dev/loop3"
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            ],
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            "lv_name": "ceph_lv0",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            "lv_size": "21470642176",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            "name": "ceph_lv0",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            "tags": {
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.cluster_name": "ceph",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.crush_device_class": "",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.encrypted": "0",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.osd_id": "0",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.type": "block",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.vdo": "0",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:                "ceph.with_tpm": "0"
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            },
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            "type": "block",
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:            "vg_name": "ceph_vg0"
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:        }
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]:    ]
Jan 22 05:10:19 np0005591760 dazzling_chandrasekhar[279138]: }
Jan 22 05:10:19 np0005591760 systemd[1]: libpod-686fa91f8a90c9ffb2c6d9e0292b63e126ea98c669483fd515390ce528c0bc34.scope: Deactivated successfully.
Jan 22 05:10:19 np0005591760 podman[279124]: 2026-01-22 10:10:19.229298716 +0000 UTC m=+0.340931763 container died 686fa91f8a90c9ffb2c6d9e0292b63e126ea98c669483fd515390ce528c0bc34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 22 05:10:19 np0005591760 systemd[1]: var-lib-containers-storage-overlay-ec6a5b905b1049bd53f3cc539905d3beed41c9553995f04df0c872e45dbd7d3e-merged.mount: Deactivated successfully.
Jan 22 05:10:19 np0005591760 podman[279124]: 2026-01-22 10:10:19.252213806 +0000 UTC m=+0.363846852 container remove 686fa91f8a90c9ffb2c6d9e0292b63e126ea98c669483fd515390ce528c0bc34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_chandrasekhar, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:10:19 np0005591760 systemd[1]: libpod-conmon-686fa91f8a90c9ffb2c6d9e0292b63e126ea98c669483fd515390ce528c0bc34.scope: Deactivated successfully.
Jan 22 05:10:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:19.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:10:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:10:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:10:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:10:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:10:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:10:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:10:19 np0005591760 podman[279240]: 2026-01-22 10:10:19.689455724 +0000 UTC m=+0.029444646 container create 811ff87b5a8ab3b7c514d9a85d9fdf9959f80d3c4047cec0c0f2e6f8c9c32fc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:10:19 np0005591760 systemd[1]: Started libpod-conmon-811ff87b5a8ab3b7c514d9a85d9fdf9959f80d3c4047cec0c0f2e6f8c9c32fc5.scope.
Jan 22 05:10:19 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:10:19 np0005591760 podman[279240]: 2026-01-22 10:10:19.725983128 +0000 UTC m=+0.065972069 container init 811ff87b5a8ab3b7c514d9a85d9fdf9959f80d3c4047cec0c0f2e6f8c9c32fc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 22 05:10:19 np0005591760 podman[279240]: 2026-01-22 10:10:19.730653286 +0000 UTC m=+0.070642197 container start 811ff87b5a8ab3b7c514d9a85d9fdf9959f80d3c4047cec0c0f2e6f8c9c32fc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_noyce, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 22 05:10:19 np0005591760 podman[279240]: 2026-01-22 10:10:19.731832311 +0000 UTC m=+0.071821252 container attach 811ff87b5a8ab3b7c514d9a85d9fdf9959f80d3c4047cec0c0f2e6f8c9c32fc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_noyce, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:10:19 np0005591760 quizzical_noyce[279253]: 167 167
Jan 22 05:10:19 np0005591760 systemd[1]: libpod-811ff87b5a8ab3b7c514d9a85d9fdf9959f80d3c4047cec0c0f2e6f8c9c32fc5.scope: Deactivated successfully.
Jan 22 05:10:19 np0005591760 podman[279240]: 2026-01-22 10:10:19.734655766 +0000 UTC m=+0.074644687 container died 811ff87b5a8ab3b7c514d9a85d9fdf9959f80d3c4047cec0c0f2e6f8c9c32fc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 05:10:19 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a81eb7fe90fcf0f62e550227e33217f5a5bcdc9de7bac48e6dd06a588709c9ab-merged.mount: Deactivated successfully.
Jan 22 05:10:19 np0005591760 podman[279240]: 2026-01-22 10:10:19.754473663 +0000 UTC m=+0.094462584 container remove 811ff87b5a8ab3b7c514d9a85d9fdf9959f80d3c4047cec0c0f2e6f8c9c32fc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 05:10:19 np0005591760 podman[279240]: 2026-01-22 10:10:19.677248892 +0000 UTC m=+0.017237833 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:10:19 np0005591760 systemd[1]: libpod-conmon-811ff87b5a8ab3b7c514d9a85d9fdf9959f80d3c4047cec0c0f2e6f8c9c32fc5.scope: Deactivated successfully.
Jan 22 05:10:19 np0005591760 podman[279275]: 2026-01-22 10:10:19.8722195 +0000 UTC m=+0.027953654 container create 7043dd3512fe93872361891749056511efa13c1d9af44e34697763f82f2bd14c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_thompson, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:10:19 np0005591760 systemd[1]: Started libpod-conmon-7043dd3512fe93872361891749056511efa13c1d9af44e34697763f82f2bd14c.scope.
Jan 22 05:10:19 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:10:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484e8133232d04e1fdadf8cc67fa44f6456fed065c38638bd30ae2710867b103/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484e8133232d04e1fdadf8cc67fa44f6456fed065c38638bd30ae2710867b103/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484e8133232d04e1fdadf8cc67fa44f6456fed065c38638bd30ae2710867b103/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:19 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484e8133232d04e1fdadf8cc67fa44f6456fed065c38638bd30ae2710867b103/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:10:19 np0005591760 podman[279275]: 2026-01-22 10:10:19.937369467 +0000 UTC m=+0.093103630 container init 7043dd3512fe93872361891749056511efa13c1d9af44e34697763f82f2bd14c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 22 05:10:19 np0005591760 podman[279275]: 2026-01-22 10:10:19.94262481 +0000 UTC m=+0.098358963 container start 7043dd3512fe93872361891749056511efa13c1d9af44e34697763f82f2bd14c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 22 05:10:19 np0005591760 podman[279275]: 2026-01-22 10:10:19.943859448 +0000 UTC m=+0.099593602 container attach 7043dd3512fe93872361891749056511efa13c1d9af44e34697763f82f2bd14c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 22 05:10:19 np0005591760 podman[279275]: 2026-01-22 10:10:19.86045362 +0000 UTC m=+0.016187793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:10:20 np0005591760 zen_thompson[279289]: {}
Jan 22 05:10:20 np0005591760 lvm[279367]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:10:20 np0005591760 lvm[279367]: VG ceph_vg0 finished
Jan 22 05:10:20 np0005591760 systemd[1]: libpod-7043dd3512fe93872361891749056511efa13c1d9af44e34697763f82f2bd14c.scope: Deactivated successfully.
Jan 22 05:10:20 np0005591760 podman[279275]: 2026-01-22 10:10:20.460551677 +0000 UTC m=+0.616285829 container died 7043dd3512fe93872361891749056511efa13c1d9af44e34697763f82f2bd14c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_thompson, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 05:10:20 np0005591760 systemd[1]: var-lib-containers-storage-overlay-484e8133232d04e1fdadf8cc67fa44f6456fed065c38638bd30ae2710867b103-merged.mount: Deactivated successfully.
Jan 22 05:10:20 np0005591760 lvm[279369]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:10:20 np0005591760 lvm[279369]: VG ceph_vg0 finished
Jan 22 05:10:20 np0005591760 podman[279275]: 2026-01-22 10:10:20.485032509 +0000 UTC m=+0.640766662 container remove 7043dd3512fe93872361891749056511efa13c1d9af44e34697763f82f2bd14c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:10:20 np0005591760 systemd[1]: libpod-conmon-7043dd3512fe93872361891749056511efa13c1d9af44e34697763f82f2bd14c.scope: Deactivated successfully.
Jan 22 05:10:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:10:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:10:20 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:10:20 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:10:20 np0005591760 nova_compute[248045]: 2026-01-22 10:10:20.648 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:20 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:10:20 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:10:20 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:20 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:20 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:20.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:21.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:10:22 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:22 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:22 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:22.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:23 np0005591760 nova_compute[248045]: 2026-01-22 10:10:23.125 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:23.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:10:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:23.593Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:23.603Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:23.604Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:23.609Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:10:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:10:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:10:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:10:24 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.319 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.319 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.320 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.320 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.320 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:10:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:10:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2583851642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.657 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.862 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.863 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4502MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, 
"label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.863 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.863 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.915 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.916 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:10:24 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:24 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:24 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:24.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:24 np0005591760 nova_compute[248045]: 2026-01-22 10:10:24.954 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:10:25 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:10:25 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3922902611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:10:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:25.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:25 np0005591760 nova_compute[248045]: 2026-01-22 10:10:25.288 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:10:25 np0005591760 nova_compute[248045]: 2026-01-22 10:10:25.292 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:10:25 np0005591760 nova_compute[248045]: 2026-01-22 10:10:25.306 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:10:25 np0005591760 nova_compute[248045]: 2026-01-22 10:10:25.307 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:10:25 np0005591760 nova_compute[248045]: 2026-01-22 10:10:25.307 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:10:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:10:25 np0005591760 nova_compute[248045]: 2026-01-22 10:10:25.650 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:26 np0005591760 nova_compute[248045]: 2026-01-22 10:10:26.307 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:10:26 np0005591760 nova_compute[248045]: 2026-01-22 10:10:26.308 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:10:26 np0005591760 nova_compute[248045]: 2026-01-22 10:10:26.308 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:10:26 np0005591760 nova_compute[248045]: 2026-01-22 10:10:26.308 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:10:26 np0005591760 nova_compute[248045]: 2026-01-22 10:10:26.308 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:10:26 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:26 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:26 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:26.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:27.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:27.122Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:27.123Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:27.124Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:27.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:27 np0005591760 nova_compute[248045]: 2026-01-22 10:10:27.295 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:10:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 22 05:10:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:10:27] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:10:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:10:27] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:10:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:10:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:10:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:10:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:10:28 np0005591760 nova_compute[248045]: 2026-01-22 10:10:28.127 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:10:28 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:28 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:28 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:28.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:28.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:28.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:28.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:28.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:10:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:29.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:10:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:10:30 np0005591760 nova_compute[248045]: 2026-01-22 10:10:30.652 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:30 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:30 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:30 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:30.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:31 np0005591760 podman[279459]: 2026-01-22 10:10:31.065366085 +0000 UTC m=+0.054968325 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 05:10:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:31.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:31 np0005591760 nova_compute[248045]: 2026-01-22 10:10:31.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:10:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:10:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:10:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:10:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:10:32 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:31 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:10:32 np0005591760 nova_compute[248045]: 2026-01-22 10:10:32.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:10:32 np0005591760 nova_compute[248045]: 2026-01-22 10:10:32.302 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:10:32 np0005591760 nova_compute[248045]: 2026-01-22 10:10:32.302 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:10:32 np0005591760 nova_compute[248045]: 2026-01-22 10:10:32.341 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:10:32 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:32 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:32 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:32.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:33 np0005591760 podman[279477]: 2026-01-22 10:10:33.084255704 +0000 UTC m=+0.072217399 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 05:10:33 np0005591760 nova_compute[248045]: 2026-01-22 10:10:33.128 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:33.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:33 np0005591760 nova_compute[248045]: 2026-01-22 10:10:33.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:10:33 np0005591760 nova_compute[248045]: 2026-01-22 10:10:33.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:10:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:10:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:33.594Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:33.614Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:33.614Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:33.615Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:10:34 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:34 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:10:34 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:34.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:10:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:35.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:10:35 np0005591760 nova_compute[248045]: 2026-01-22 10:10:35.654 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:10:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:10:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:10:36 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:10:36 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:36 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:36 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:36.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:37.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:37.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:37.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:37.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:37.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:10:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:10:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:10:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:10:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:10:38 np0005591760 nova_compute[248045]: 2026-01-22 10:10:38.130 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:10:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:38.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:38 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:38 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:38 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:38.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:38.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:38.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:38.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:10:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:39.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:10:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:10:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:10:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:10:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:10:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:10:40 np0005591760 nova_compute[248045]: 2026-01-22 10:10:40.655 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:40 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:40 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:10:40 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:40.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:10:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:41.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 22 05:10:42 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:42 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:42 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:42.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:43 np0005591760 nova_compute[248045]: 2026-01-22 10:10:43.133 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:43.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:10:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:43.594Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:43.608Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:43.608Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:43.609Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:10:44 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:44 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:10:44 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:44.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:10:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:10:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:10:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:10:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:45 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:10:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:45.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 22 05:10:45 np0005591760 nova_compute[248045]: 2026-01-22 10:10:45.654 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:46 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:46 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:46 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:46.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:47.114Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:47.123Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:47.123Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:47.123Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:47.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:10:47.329 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:10:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:10:47.330 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:10:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:10:47.330 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:10:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:10:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:10:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:10:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:10:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:10:48 np0005591760 nova_compute[248045]: 2026-01-22 10:10:48.135 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:10:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:48.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:48.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:48.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:48.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:48 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:48 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:48 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:48.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:10:49
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.nfs', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data']
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:10:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:49.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:10:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:10:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:10:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:10:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:10:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:50 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:10:50 np0005591760 nova_compute[248045]: 2026-01-22 10:10:50.655 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:50 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:50 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:50 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:50.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:51.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 22 05:10:52 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:52 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:52 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:52.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:53 np0005591760 nova_compute[248045]: 2026-01-22 10:10:53.136 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:53.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:10:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:53.595Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:53.643Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:53.643Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:53.643Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:10:54 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:54 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:54 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:54.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:10:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:10:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:10:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:55 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:10:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:55.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:10:55 np0005591760 nova_compute[248045]: 2026-01-22 10:10:55.656 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:56 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:56 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:56 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:56.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:57.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:57.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:57.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:57.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:57.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:10:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:10:57] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:10:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:10:57] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:10:58 np0005591760 nova_compute[248045]: 2026-01-22 10:10:58.139 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:10:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:10:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:58.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:58 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:58 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:58 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:10:58.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:58.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:58.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:10:58.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:10:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:10:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:10:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:10:59.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:10:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:11:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:11:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:11:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:10:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:11:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:00 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:11:00 np0005591760 nova_compute[248045]: 2026-01-22 10:11:00.658 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:00 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:00 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:00 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:00.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:01.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:11:02 np0005591760 podman[279579]: 2026-01-22 10:11:02.037990416 +0000 UTC m=+0.031907640 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 05:11:02 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:02 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:02 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:02.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:03 np0005591760 nova_compute[248045]: 2026-01-22 10:11:03.141 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:03.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:03.595Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:11:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:03.704Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:03.704Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:03.704Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:04 np0005591760 podman[279597]: 2026-01-22 10:11:04.098162356 +0000 UTC m=+0.081504634 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 05:11:04 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:04 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:04 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:04.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:11:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:11:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:11:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:05 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:11:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:05.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:11:05 np0005591760 nova_compute[248045]: 2026-01-22 10:11:05.658 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:06 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:06 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:06 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:06.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:07.116Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:07.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:07.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:07.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:07.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:11:07] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:11:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:11:07] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:11:08 np0005591760 nova_compute[248045]: 2026-01-22 10:11:08.143 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:08 np0005591760 nova_compute[248045]: 2026-01-22 10:11:08.152 248049 DEBUG oslo_concurrency.processutils [None req-effb9288-63a0-473f-963a-bb1b4e8bdf74 12c3378977944a34b6df27af0c168a73 a894ac5b4f744f208fa506d5e8f67970 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:11:08 np0005591760 nova_compute[248045]: 2026-01-22 10:11:08.175 248049 DEBUG oslo_concurrency.processutils [None req-effb9288-63a0-473f-963a-bb1b4e8bdf74 12c3378977944a34b6df27af0c168a73 a894ac5b4f744f208fa506d5e8f67970 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:11:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:11:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:08.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:08.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:08.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:08.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:08 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:08 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:08 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:08.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:09.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:11:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:11:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:11:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:10 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:11:10 np0005591760 nova_compute[248045]: 2026-01-22 10:11:10.659 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:10 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:10 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:10 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:10.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:11:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:11.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:11:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:11:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:11:12.655 164103 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:52:1d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:ec:a7:e9:bb:bd'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 05:11:12 np0005591760 nova_compute[248045]: 2026-01-22 10:11:12.656 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:12 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:11:12.656 164103 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 05:11:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:12.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:13 np0005591760 nova_compute[248045]: 2026-01-22 10:11:13.144 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:13.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:13.596Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:11:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:13.903Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:13.905Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:13.909Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:11:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:11:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:14 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:11:15 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:15 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:11:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:15.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:15.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:11:15 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:11:15.657 164103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e200ec57-2c57-4374-93b1-e04a1348b8ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 05:11:15 np0005591760 nova_compute[248045]: 2026-01-22 10:11:15.659 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:17.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:17.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 22 05:11:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:17.328Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:17.328Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:17.328Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:17.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:11:17] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:11:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:11:17] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:11:18 np0005591760 nova_compute[248045]: 2026-01-22 10:11:18.146 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:11:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:18.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:18.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:18.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:18.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:19.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:11:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:19.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:11:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:11:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:11:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:11:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:11:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:11:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:11:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:11:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:11:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:19 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:11:20 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:20 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:11:20 np0005591760 nova_compute[248045]: 2026-01-22 10:11:20.660 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:21.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:11:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:11:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:11:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:11:21 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:11:21 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:11:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:21.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:21 np0005591760 podman[279822]: 2026-01-22 10:11:21.696158845 +0000 UTC m=+0.029015135 container create 7ebd803cc22a23b4d5f545c3a446486d2abf49afcecec6e285954b5442106ca2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_shamir, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 22 05:11:21 np0005591760 systemd[1]: Started libpod-conmon-7ebd803cc22a23b4d5f545c3a446486d2abf49afcecec6e285954b5442106ca2.scope.
Jan 22 05:11:21 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:11:21 np0005591760 podman[279822]: 2026-01-22 10:11:21.749631627 +0000 UTC m=+0.082487918 container init 7ebd803cc22a23b4d5f545c3a446486d2abf49afcecec6e285954b5442106ca2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:11:21 np0005591760 podman[279822]: 2026-01-22 10:11:21.754672846 +0000 UTC m=+0.087529136 container start 7ebd803cc22a23b4d5f545c3a446486d2abf49afcecec6e285954b5442106ca2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_shamir, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:11:21 np0005591760 podman[279822]: 2026-01-22 10:11:21.757075046 +0000 UTC m=+0.089931337 container attach 7ebd803cc22a23b4d5f545c3a446486d2abf49afcecec6e285954b5442106ca2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_shamir, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 05:11:21 np0005591760 blissful_shamir[279835]: 167 167
Jan 22 05:11:21 np0005591760 systemd[1]: libpod-7ebd803cc22a23b4d5f545c3a446486d2abf49afcecec6e285954b5442106ca2.scope: Deactivated successfully.
Jan 22 05:11:21 np0005591760 podman[279822]: 2026-01-22 10:11:21.759584289 +0000 UTC m=+0.092440580 container died 7ebd803cc22a23b4d5f545c3a446486d2abf49afcecec6e285954b5442106ca2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_shamir, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:11:21 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e44974e81425bf4c2168d2d563c23ac18e334452aa2c7056b22b3a69493ddc50-merged.mount: Deactivated successfully.
Jan 22 05:11:21 np0005591760 podman[279822]: 2026-01-22 10:11:21.780362558 +0000 UTC m=+0.113218850 container remove 7ebd803cc22a23b4d5f545c3a446486d2abf49afcecec6e285954b5442106ca2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 05:11:21 np0005591760 podman[279822]: 2026-01-22 10:11:21.68454516 +0000 UTC m=+0.017401471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:11:21 np0005591760 systemd[1]: libpod-conmon-7ebd803cc22a23b4d5f545c3a446486d2abf49afcecec6e285954b5442106ca2.scope: Deactivated successfully.
Jan 22 05:11:21 np0005591760 podman[279857]: 2026-01-22 10:11:21.900842681 +0000 UTC m=+0.029220543 container create 5fa379d78add5ea144a006acded0e31dada4095632f0e627c3baa8df97fd7a12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:11:21 np0005591760 systemd[1]: Started libpod-conmon-5fa379d78add5ea144a006acded0e31dada4095632f0e627c3baa8df97fd7a12.scope.
Jan 22 05:11:21 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:11:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ae225feb250f35d87d76361587687a726089ed2c619b46a9b242fd4af85be8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ae225feb250f35d87d76361587687a726089ed2c619b46a9b242fd4af85be8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:21 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:11:21 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:11:21 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:11:21 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:11:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ae225feb250f35d87d76361587687a726089ed2c619b46a9b242fd4af85be8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ae225feb250f35d87d76361587687a726089ed2c619b46a9b242fd4af85be8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:21 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ae225feb250f35d87d76361587687a726089ed2c619b46a9b242fd4af85be8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:21 np0005591760 podman[279857]: 2026-01-22 10:11:21.957428727 +0000 UTC m=+0.085806588 container init 5fa379d78add5ea144a006acded0e31dada4095632f0e627c3baa8df97fd7a12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_lalande, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 05:11:21 np0005591760 podman[279857]: 2026-01-22 10:11:21.964582199 +0000 UTC m=+0.092960060 container start 5fa379d78add5ea144a006acded0e31dada4095632f0e627c3baa8df97fd7a12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_lalande, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:11:21 np0005591760 podman[279857]: 2026-01-22 10:11:21.967894317 +0000 UTC m=+0.096272198 container attach 5fa379d78add5ea144a006acded0e31dada4095632f0e627c3baa8df97fd7a12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_lalande, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 05:11:21 np0005591760 podman[279857]: 2026-01-22 10:11:21.888666838 +0000 UTC m=+0.017044709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:11:22 np0005591760 cranky_lalande[279870]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:11:22 np0005591760 cranky_lalande[279870]: --> All data devices are unavailable
Jan 22 05:11:22 np0005591760 podman[279857]: 2026-01-22 10:11:22.243115383 +0000 UTC m=+0.371493254 container died 5fa379d78add5ea144a006acded0e31dada4095632f0e627c3baa8df97fd7a12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_lalande, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:11:22 np0005591760 systemd[1]: libpod-5fa379d78add5ea144a006acded0e31dada4095632f0e627c3baa8df97fd7a12.scope: Deactivated successfully.
Jan 22 05:11:22 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a0ae225feb250f35d87d76361587687a726089ed2c619b46a9b242fd4af85be8-merged.mount: Deactivated successfully.
Jan 22 05:11:22 np0005591760 podman[279857]: 2026-01-22 10:11:22.265424579 +0000 UTC m=+0.393802431 container remove 5fa379d78add5ea144a006acded0e31dada4095632f0e627c3baa8df97fd7a12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_lalande, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:11:22 np0005591760 systemd[1]: libpod-conmon-5fa379d78add5ea144a006acded0e31dada4095632f0e627c3baa8df97fd7a12.scope: Deactivated successfully.
Jan 22 05:11:22 np0005591760 podman[279976]: 2026-01-22 10:11:22.676144616 +0000 UTC m=+0.030179792 container create 20e6dba8b5a9ab479c6be976d70e013589cb5cab911d85eaf9806358a009d3f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:11:22 np0005591760 systemd[1]: Started libpod-conmon-20e6dba8b5a9ab479c6be976d70e013589cb5cab911d85eaf9806358a009d3f6.scope.
Jan 22 05:11:22 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:11:22 np0005591760 podman[279976]: 2026-01-22 10:11:22.717882527 +0000 UTC m=+0.071917703 container init 20e6dba8b5a9ab479c6be976d70e013589cb5cab911d85eaf9806358a009d3f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:11:22 np0005591760 podman[279976]: 2026-01-22 10:11:22.723450579 +0000 UTC m=+0.077485755 container start 20e6dba8b5a9ab479c6be976d70e013589cb5cab911d85eaf9806358a009d3f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galileo, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:11:22 np0005591760 podman[279976]: 2026-01-22 10:11:22.724714923 +0000 UTC m=+0.078750120 container attach 20e6dba8b5a9ab479c6be976d70e013589cb5cab911d85eaf9806358a009d3f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galileo, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:11:22 np0005591760 serene_galileo[279991]: 167 167
Jan 22 05:11:22 np0005591760 systemd[1]: libpod-20e6dba8b5a9ab479c6be976d70e013589cb5cab911d85eaf9806358a009d3f6.scope: Deactivated successfully.
Jan 22 05:11:22 np0005591760 conmon[279991]: conmon 20e6dba8b5a9ab479c6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20e6dba8b5a9ab479c6be976d70e013589cb5cab911d85eaf9806358a009d3f6.scope/container/memory.events
Jan 22 05:11:22 np0005591760 podman[279976]: 2026-01-22 10:11:22.728173637 +0000 UTC m=+0.082208813 container died 20e6dba8b5a9ab479c6be976d70e013589cb5cab911d85eaf9806358a009d3f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 05:11:22 np0005591760 systemd[1]: var-lib-containers-storage-overlay-e3f90c1f96aa2006575c69139625fe6357cc85823046563ee7ca1b96a4f21ca4-merged.mount: Deactivated successfully.
Jan 22 05:11:22 np0005591760 podman[279976]: 2026-01-22 10:11:22.746170961 +0000 UTC m=+0.100206136 container remove 20e6dba8b5a9ab479c6be976d70e013589cb5cab911d85eaf9806358a009d3f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 05:11:22 np0005591760 podman[279976]: 2026-01-22 10:11:22.66179421 +0000 UTC m=+0.015829406 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:11:22 np0005591760 systemd[1]: libpod-conmon-20e6dba8b5a9ab479c6be976d70e013589cb5cab911d85eaf9806358a009d3f6.scope: Deactivated successfully.
Jan 22 05:11:22 np0005591760 podman[280013]: 2026-01-22 10:11:22.867997174 +0000 UTC m=+0.029412314 container create e234cb839216f481dd07ce88c51221b4a9afa7a5eacf7e90954843db59d53fac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_davinci, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:11:22 np0005591760 systemd[1]: Started libpod-conmon-e234cb839216f481dd07ce88c51221b4a9afa7a5eacf7e90954843db59d53fac.scope.
Jan 22 05:11:22 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:11:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49834a46ad2851a580f94dd795543e2e3196667b86c2feb819bc3c74121c0819/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49834a46ad2851a580f94dd795543e2e3196667b86c2feb819bc3c74121c0819/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49834a46ad2851a580f94dd795543e2e3196667b86c2feb819bc3c74121c0819/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:22 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49834a46ad2851a580f94dd795543e2e3196667b86c2feb819bc3c74121c0819/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:22 np0005591760 podman[280013]: 2026-01-22 10:11:22.926808916 +0000 UTC m=+0.088224067 container init e234cb839216f481dd07ce88c51221b4a9afa7a5eacf7e90954843db59d53fac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:11:22 np0005591760 podman[280013]: 2026-01-22 10:11:22.932418547 +0000 UTC m=+0.093833688 container start e234cb839216f481dd07ce88c51221b4a9afa7a5eacf7e90954843db59d53fac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_davinci, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 22 05:11:22 np0005591760 podman[280013]: 2026-01-22 10:11:22.93377214 +0000 UTC m=+0.095187281 container attach e234cb839216f481dd07ce88c51221b4a9afa7a5eacf7e90954843db59d53fac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 05:11:22 np0005591760 podman[280013]: 2026-01-22 10:11:22.856399399 +0000 UTC m=+0.017814560 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:11:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:23.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]: {
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:    "0": [
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:        {
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            "devices": [
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "/dev/loop3"
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            ],
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            "lv_name": "ceph_lv0",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            "lv_size": "21470642176",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            "name": "ceph_lv0",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            "tags": {
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.cluster_name": "ceph",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.crush_device_class": "",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.encrypted": "0",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.osd_id": "0",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.type": "block",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.vdo": "0",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:                "ceph.with_tpm": "0"
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            },
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            "type": "block",
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:            "vg_name": "ceph_vg0"
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:        }
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]:    ]
Jan 22 05:11:23 np0005591760 intelligent_davinci[280026]: }
Jan 22 05:11:23 np0005591760 nova_compute[248045]: 2026-01-22 10:11:23.147 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:23 np0005591760 systemd[1]: libpod-e234cb839216f481dd07ce88c51221b4a9afa7a5eacf7e90954843db59d53fac.scope: Deactivated successfully.
Jan 22 05:11:23 np0005591760 podman[280013]: 2026-01-22 10:11:23.161540086 +0000 UTC m=+0.322955227 container died e234cb839216f481dd07ce88c51221b4a9afa7a5eacf7e90954843db59d53fac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_davinci, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 05:11:23 np0005591760 systemd[1]: var-lib-containers-storage-overlay-49834a46ad2851a580f94dd795543e2e3196667b86c2feb819bc3c74121c0819-merged.mount: Deactivated successfully.
Jan 22 05:11:23 np0005591760 podman[280013]: 2026-01-22 10:11:23.180299528 +0000 UTC m=+0.341714669 container remove e234cb839216f481dd07ce88c51221b4a9afa7a5eacf7e90954843db59d53fac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_davinci, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 05:11:23 np0005591760 systemd[1]: libpod-conmon-e234cb839216f481dd07ce88c51221b4a9afa7a5eacf7e90954843db59d53fac.scope: Deactivated successfully.
Jan 22 05:11:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:11:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:23.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:23.597Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:23 np0005591760 podman[280125]: 2026-01-22 10:11:23.604647658 +0000 UTC m=+0.030617998 container create 63b1066059f7c85321d61c0d43c4d1c5445719586bd59940ba0da4974b764e6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:11:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:23.610Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:23.611Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:23.611Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:23 np0005591760 systemd[1]: Started libpod-conmon-63b1066059f7c85321d61c0d43c4d1c5445719586bd59940ba0da4974b764e6b.scope.
Jan 22 05:11:23 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:11:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:11:23 np0005591760 podman[280125]: 2026-01-22 10:11:23.655674779 +0000 UTC m=+0.081645119 container init 63b1066059f7c85321d61c0d43c4d1c5445719586bd59940ba0da4974b764e6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 05:11:23 np0005591760 podman[280125]: 2026-01-22 10:11:23.660113301 +0000 UTC m=+0.086083631 container start 63b1066059f7c85321d61c0d43c4d1c5445719586bd59940ba0da4974b764e6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Jan 22 05:11:23 np0005591760 podman[280125]: 2026-01-22 10:11:23.66111453 +0000 UTC m=+0.087084859 container attach 63b1066059f7c85321d61c0d43c4d1c5445719586bd59940ba0da4974b764e6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:11:23 np0005591760 confident_feynman[280139]: 167 167
Jan 22 05:11:23 np0005591760 systemd[1]: libpod-63b1066059f7c85321d61c0d43c4d1c5445719586bd59940ba0da4974b764e6b.scope: Deactivated successfully.
Jan 22 05:11:23 np0005591760 podman[280125]: 2026-01-22 10:11:23.665040384 +0000 UTC m=+0.091010714 container died 63b1066059f7c85321d61c0d43c4d1c5445719586bd59940ba0da4974b764e6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_feynman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:11:23 np0005591760 systemd[1]: var-lib-containers-storage-overlay-fd136a4eba597c25fc8176397dc4cc56e2b16be31e1fedd0ca0dc1a96ce8c2cb-merged.mount: Deactivated successfully.
Jan 22 05:11:23 np0005591760 podman[280125]: 2026-01-22 10:11:23.683059188 +0000 UTC m=+0.109029518 container remove 63b1066059f7c85321d61c0d43c4d1c5445719586bd59940ba0da4974b764e6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_feynman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 22 05:11:23 np0005591760 podman[280125]: 2026-01-22 10:11:23.592006567 +0000 UTC m=+0.017976917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:11:23 np0005591760 systemd[1]: libpod-conmon-63b1066059f7c85321d61c0d43c4d1c5445719586bd59940ba0da4974b764e6b.scope: Deactivated successfully.
Jan 22 05:11:23 np0005591760 podman[280160]: 2026-01-22 10:11:23.803498315 +0000 UTC m=+0.030122674 container create 424657131d51dea2f76f72ef32b8e29e6b4479db1534af24adbfbf6cc9e61873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_jones, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:11:23 np0005591760 systemd[1]: Started libpod-conmon-424657131d51dea2f76f72ef32b8e29e6b4479db1534af24adbfbf6cc9e61873.scope.
Jan 22 05:11:23 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:11:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270c27606501469978e712755886478b7c5b79e328ddf6d7ed935d2efbe96ae0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270c27606501469978e712755886478b7c5b79e328ddf6d7ed935d2efbe96ae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270c27606501469978e712755886478b7c5b79e328ddf6d7ed935d2efbe96ae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:23 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270c27606501469978e712755886478b7c5b79e328ddf6d7ed935d2efbe96ae0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:11:23 np0005591760 podman[280160]: 2026-01-22 10:11:23.865494534 +0000 UTC m=+0.092118903 container init 424657131d51dea2f76f72ef32b8e29e6b4479db1534af24adbfbf6cc9e61873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_jones, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 22 05:11:23 np0005591760 podman[280160]: 2026-01-22 10:11:23.871851323 +0000 UTC m=+0.098475683 container start 424657131d51dea2f76f72ef32b8e29e6b4479db1534af24adbfbf6cc9e61873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:11:23 np0005591760 podman[280160]: 2026-01-22 10:11:23.873161655 +0000 UTC m=+0.099786014 container attach 424657131d51dea2f76f72ef32b8e29e6b4479db1534af24adbfbf6cc9e61873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_jones, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:11:23 np0005591760 podman[280160]: 2026-01-22 10:11:23.789812302 +0000 UTC m=+0.016436681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:11:24 np0005591760 lvm[280251]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:11:24 np0005591760 lvm[280251]: VG ceph_vg0 finished
Jan 22 05:11:24 np0005591760 nice_jones[280173]: {}
Jan 22 05:11:24 np0005591760 systemd[1]: libpod-424657131d51dea2f76f72ef32b8e29e6b4479db1534af24adbfbf6cc9e61873.scope: Deactivated successfully.
Jan 22 05:11:24 np0005591760 podman[280160]: 2026-01-22 10:11:24.381900803 +0000 UTC m=+0.608525162 container died 424657131d51dea2f76f72ef32b8e29e6b4479db1534af24adbfbf6cc9e61873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 05:11:24 np0005591760 systemd[1]: var-lib-containers-storage-overlay-270c27606501469978e712755886478b7c5b79e328ddf6d7ed935d2efbe96ae0-merged.mount: Deactivated successfully.
Jan 22 05:11:24 np0005591760 podman[280160]: 2026-01-22 10:11:24.403289534 +0000 UTC m=+0.629913893 container remove 424657131d51dea2f76f72ef32b8e29e6b4479db1534af24adbfbf6cc9e61873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 05:11:24 np0005591760 systemd[1]: libpod-conmon-424657131d51dea2f76f72ef32b8e29e6b4479db1534af24adbfbf6cc9e61873.scope: Deactivated successfully.
Jan 22 05:11:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:11:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:11:24 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:11:24 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:11:24 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:11:24 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:11:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:24 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:11:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:11:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:11:25 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:25 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:11:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:25.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:11:25 np0005591760 nova_compute[248045]: 2026-01-22 10:11:25.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:11:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:25.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:25 np0005591760 nova_compute[248045]: 2026-01-22 10:11:25.662 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:26 np0005591760 nova_compute[248045]: 2026-01-22 10:11:26.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:11:26 np0005591760 nova_compute[248045]: 2026-01-22 10:11:26.574 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:11:26 np0005591760 nova_compute[248045]: 2026-01-22 10:11:26.574 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:11:26 np0005591760 nova_compute[248045]: 2026-01-22 10:11:26.574 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:11:26 np0005591760 nova_compute[248045]: 2026-01-22 10:11:26.574 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:11:26 np0005591760 nova_compute[248045]: 2026-01-22 10:11:26.575 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:11:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:11:26 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271804164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:11:26 np0005591760 nova_compute[248045]: 2026-01-22 10:11:26.917 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:11:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:27.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:27.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:27 np0005591760 nova_compute[248045]: 2026-01-22 10:11:27.127 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:11:27 np0005591760 nova_compute[248045]: 2026-01-22 10:11:27.128 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4511MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, 
"label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:11:27 np0005591760 nova_compute[248045]: 2026-01-22 10:11:27.128 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:11:27 np0005591760 nova_compute[248045]: 2026-01-22 10:11:27.128 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:11:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:27.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:27.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:27.139Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:27 np0005591760 nova_compute[248045]: 2026-01-22 10:11:27.195 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:11:27 np0005591760 nova_compute[248045]: 2026-01-22 10:11:27.195 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:11:27 np0005591760 nova_compute[248045]: 2026-01-22 10:11:27.208 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:11:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:11:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:27.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:11:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3675741035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:11:27 np0005591760 nova_compute[248045]: 2026-01-22 10:11:27.556 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:11:27 np0005591760 nova_compute[248045]: 2026-01-22 10:11:27.560 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:11:27 np0005591760 nova_compute[248045]: 2026-01-22 10:11:27.585 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:11:27 np0005591760 nova_compute[248045]: 2026-01-22 10:11:27.587 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:11:27 np0005591760 nova_compute[248045]: 2026-01-22 10:11:27.587 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:11:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:11:27] "GET /metrics HTTP/1.1" 200 48595 "" "Prometheus/2.51.0"
Jan 22 05:11:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:11:27] "GET /metrics HTTP/1.1" 200 48595 "" "Prometheus/2.51.0"
Jan 22 05:11:28 np0005591760 nova_compute[248045]: 2026-01-22 10:11:28.150 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:28 np0005591760 nova_compute[248045]: 2026-01-22 10:11:28.582 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:11:28 np0005591760 nova_compute[248045]: 2026-01-22 10:11:28.583 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:11:28 np0005591760 nova_compute[248045]: 2026-01-22 10:11:28.583 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:11:28 np0005591760 nova_compute[248045]: 2026-01-22 10:11:28.583 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:11:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:11:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:28.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:28.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:28.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:28.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:11:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:29.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:11:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:11:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:29.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:11:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:11:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:29 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:11:30 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:30 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:11:30 np0005591760 nova_compute[248045]: 2026-01-22 10:11:30.663 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:31.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:11:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:31.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:32 np0005591760 nova_compute[248045]: 2026-01-22 10:11:32.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:11:32 np0005591760 nova_compute[248045]: 2026-01-22 10:11:32.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:11:32 np0005591760 nova_compute[248045]: 2026-01-22 10:11:32.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:11:32 np0005591760 nova_compute[248045]: 2026-01-22 10:11:32.317 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:11:32 np0005591760 nova_compute[248045]: 2026-01-22 10:11:32.317 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:11:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:33.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:33 np0005591760 podman[280341]: 2026-01-22 10:11:33.050849061 +0000 UTC m=+0.041862276 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 05:11:33 np0005591760 nova_compute[248045]: 2026-01-22 10:11:33.151 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:33 np0005591760 nova_compute[248045]: 2026-01-22 10:11:33.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:11:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:33.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:33.599Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:33.609Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:33.609Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:33.610Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:11:34 np0005591760 nova_compute[248045]: 2026-01-22 10:11:34.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:11:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:11:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:11:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:34 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:11:35 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:35 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:11:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:35.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:35 np0005591760 podman[280359]: 2026-01-22 10:11:35.063281571 +0000 UTC m=+0.056565408 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 05:11:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:11:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:35.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:35 np0005591760 nova_compute[248045]: 2026-01-22 10:11:35.665 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:37.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:37.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:37.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:37.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:37.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:37.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:11:37] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:11:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:11:37] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:11:38 np0005591760 nova_compute[248045]: 2026-01-22 10:11:38.152 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:11:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:38.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:38.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:38.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:38.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:11:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:39.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:11:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:39.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:11:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:11:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:39 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:11:40 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:40 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:11:40 np0005591760 nova_compute[248045]: 2026-01-22 10:11:40.667 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:41.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:11:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:41.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:43.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:43 np0005591760 nova_compute[248045]: 2026-01-22 10:11:43.154 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:11:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:43.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:11:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:43.599Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:43.624Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:43.625Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:43.625Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:11:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:11:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:11:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:44 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:11:45 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:45 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:11:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:11:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:45.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:11:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:11:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:45.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:45 np0005591760 nova_compute[248045]: 2026-01-22 10:11:45.668 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:47.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:47.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:47.138Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:47.139Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:47.139Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:11:47.331 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:11:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:11:47.332 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:11:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:11:47.332 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:11:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:47.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:11:47] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:11:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:11:47] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:11:48 np0005591760 nova_compute[248045]: 2026-01-22 10:11:48.156 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:11:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:48.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:48.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:48.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:48.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:11:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:49.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:11:49
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'volumes', '.nfs', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'images']
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:11:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:49.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:11:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:11:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:11:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:11:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:49 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:11:50 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:50 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:11:50 np0005591760 nova_compute[248045]: 2026-01-22 10:11:50.668 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:51.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 22 05:11:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3878992652' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 05:11:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 22 05:11:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3878992652' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 05:11:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:11:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:51.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:11:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:53.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:11:53 np0005591760 nova_compute[248045]: 2026-01-22 10:11:53.158 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:11:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:53.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:11:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:53.601Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:53.612Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:53.612Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:53.612Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:11:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:11:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:11:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:54 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:11:55 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:55 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:11:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:11:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:55.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:11:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:11:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:55.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:55 np0005591760 nova_compute[248045]: 2026-01-22 10:11:55.670 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:57.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:57.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:57.204Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:57.204Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:57.206Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:57.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:11:57] "GET /metrics HTTP/1.1" 200 48603 "" "Prometheus/2.51.0"
Jan 22 05:11:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:11:57] "GET /metrics HTTP/1.1" 200 48603 "" "Prometheus/2.51.0"
Jan 22 05:11:58 np0005591760 nova_compute[248045]: 2026-01-22 10:11:58.160 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:11:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:11:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:58.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:58.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:58.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:11:58.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:11:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:11:59.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:11:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:11:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:11:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:11:59.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:11:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:12:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:11:59 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:00 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:00 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:00 np0005591760 nova_compute[248045]: 2026-01-22 10:12:00.673 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:01.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:12:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:01.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:03.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:03 np0005591760 nova_compute[248045]: 2026-01-22 10:12:03.162 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:03.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:03.601Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:03.636Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:12:04 np0005591760 podman[280461]: 2026-01-22 10:12:04.045317282 +0000 UTC m=+0.038138332 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 05:12:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:04 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:05 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:05 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:12:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:05.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:12:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:12:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:05.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:05 np0005591760 nova_compute[248045]: 2026-01-22 10:12:05.673 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:06 np0005591760 podman[280479]: 2026-01-22 10:12:06.097437917 +0000 UTC m=+0.087157506 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 05:12:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:07.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:07.120Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:07.325Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:07.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:12:07] "GET /metrics HTTP/1.1" 200 48602 "" "Prometheus/2.51.0"
Jan 22 05:12:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:12:07] "GET /metrics HTTP/1.1" 200 48602 "" "Prometheus/2.51.0"
Jan 22 05:12:08 np0005591760 nova_compute[248045]: 2026-01-22 10:12:08.164 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:08.615Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:08.615Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:08.615Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:08.615Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:12:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:08.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:08.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:08.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:08.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:09.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:12:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:09.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:12:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:10 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:09 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:10 np0005591760 nova_compute[248045]: 2026-01-22 10:12:10.674 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:12:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:11.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:12:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:12:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:12:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:11.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:12:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:13.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:13 np0005591760 nova_compute[248045]: 2026-01-22 10:12:13.165 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:13.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:13.602Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 6 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 6 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:13.616Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:13.616Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:13.618Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:12:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:13 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:13 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:13 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:14 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:13 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:15.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:12:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:15 np0005591760 nova_compute[248045]: 2026-01-22 10:12:15.676 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:17.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:17.121Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:17.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:17.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:17.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:12:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:17.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:12:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:12:17] "GET /metrics HTTP/1.1" 200 48602 "" "Prometheus/2.51.0"
Jan 22 05:12:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:12:17] "GET /metrics HTTP/1.1" 200 48602 "" "Prometheus/2.51.0"
Jan 22 05:12:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:18 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:18 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:18 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:18 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 05:12:18 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7556 writes, 34K keys, 7556 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 7556 writes, 7556 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1570 writes, 7137 keys, 1570 commit groups, 1.0 writes per commit group, ingest: 11.53 MB, 0.02 MB/s#012Interval WAL: 1570 writes, 1570 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    369.0      0.14              0.10        19    0.008       0      0       0.0       0.0#012  L6      1/0   11.98 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5    469.8    401.2      0.59              0.40        18    0.033    102K    10K       0.0       0.0#012 Sum      1/0   11.98 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5    378.7    394.9      0.74              0.49        37    0.020    102K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.2    399.5    401.8      0.20              0.14        10    0.020     34K   3074       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    469.8    401.2      0.59              0.40        18    0.033    102K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    372.1      0.14              0.10        18    0.008       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     33.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.052, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.28 GB write, 0.12 MB/s write, 0.27 GB read, 0.12 MB/s read, 0.7 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d6a5b429b0#2 capacity: 304.00 MB usage: 23.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000137 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1379,22.52 MB,7.40787%) FilterBlock(38,287.92 KB,0.0924914%) IndexBlock(38,521.14 KB,0.16741%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 05:12:18 np0005591760 nova_compute[248045]: 2026-01-22 10:12:18.168 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:12:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:18.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:18.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:18.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:18.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:12:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:19.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:12:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1213: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:12:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:12:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:12:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:12:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:12:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:12:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:12:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:19.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:12:20 np0005591760 nova_compute[248045]: 2026-01-22 10:12:20.679 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:12:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:21.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:12:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1214: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:12:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:21.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:12:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:23.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:12:23 np0005591760 nova_compute[248045]: 2026-01-22 10:12:23.170 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1215: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:12:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:23.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:12:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:23.603Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:23.614Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:23.614Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:23.615Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:12:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:25.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:25 np0005591760 podman[280652]: 2026-01-22 10:12:25.207546061 +0000 UTC m=+0.043436005 container exec 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 05:12:25 np0005591760 podman[280652]: 2026-01-22 10:12:25.293064706 +0000 UTC m=+0.128954648 container exec_died 1d9f5246394608572c002ffb5c46dc6f25d0c9afeee8f157a0adce3588f78cad (image=quay.io/ceph/ceph:v19, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:12:25 np0005591760 nova_compute[248045]: 2026-01-22 10:12:25.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:12:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1216: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:12:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:12:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:12:25 np0005591760 podman[280749]: 2026-01-22 10:12:25.605731878 +0000 UTC m=+0.039451160 container exec e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 05:12:25 np0005591760 podman[280749]: 2026-01-22 10:12:25.61502244 +0000 UTC m=+0.048741722 container exec_died e0c5df615e8d6243c8394ecca72a20b97143634a38cd5e5e91882907a4f2c2ce (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 05:12:25 np0005591760 nova_compute[248045]: 2026-01-22 10:12:25.680 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:25 np0005591760 podman[280835]: 2026-01-22 10:12:25.876905821 +0000 UTC m=+0.037205284 container exec 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 05:12:25 np0005591760 podman[280835]: 2026-01-22 10:12:25.901971516 +0000 UTC m=+0.062270980 container exec_died 60e09ec9c4a13cb3d02bf7b4ef5c23c1ddc5c2865d71455b2eb2ad9fedeb27f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 05:12:26 np0005591760 podman[280892]: 2026-01-22 10:12:26.051968742 +0000 UTC m=+0.036393632 container exec 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 05:12:26 np0005591760 podman[280892]: 2026-01-22 10:12:26.182061507 +0000 UTC m=+0.166486387 container exec_died 36c419822c66f3be9066aeffebdce5649d6c3f75d60fecaa4b80b43bc95b8da0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 22 05:12:26 np0005591760 podman[280949]: 2026-01-22 10:12:26.327617325 +0000 UTC m=+0.035707568 container exec fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 05:12:26 np0005591760 podman[280968]: 2026-01-22 10:12:26.391868408 +0000 UTC m=+0.047298751 container exec_died fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 05:12:26 np0005591760 podman[280949]: 2026-01-22 10:12:26.394690389 +0000 UTC m=+0.102780612 container exec_died fdd1ab5a09a62f89e7c6f88ec9cddb16220d72f1310ce1a8b35f73739b11aac9 (image=quay.io/ceph/haproxy:2.3, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-haproxy-rgw-default-compute-0-duivti)
Jan 22 05:12:26 np0005591760 podman[281003]: 2026-01-22 10:12:26.534794325 +0000 UTC m=+0.036560918 container exec 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, vcs-type=git, io.buildah.version=1.28.2, name=keepalived, architecture=x86_64, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public)
Jan 22 05:12:26 np0005591760 podman[281003]: 2026-01-22 10:12:26.552964595 +0000 UTC m=+0.054731168 container exec_died 120db73083ec94a0e5df1b79f9e2a57da84217879a768a782fbfb5806ae5a329 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-keepalived-rgw-default-compute-0-idkctu, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, version=2.2.4, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived)
Jan 22 05:12:26 np0005591760 podman[281054]: 2026-01-22 10:12:26.699970359 +0000 UTC m=+0.039358895 container exec a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 05:12:26 np0005591760 podman[281054]: 2026-01-22 10:12:26.726951106 +0000 UTC m=+0.066339643 container exec_died a7c7bf92b583995a1be7992606755c2b7a86fd01ed554292f92464437b317132 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 22 05:12:26 np0005591760 podman[281102]: 2026-01-22 10:12:26.83656143 +0000 UTC m=+0.033824476 container exec 807fd1c66fa47b1f282c2487140a988a20428cb5ae8b3725296c3c9838386c77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 05:12:26 np0005591760 podman[281102]: 2026-01-22 10:12:26.847939229 +0000 UTC m=+0.045202255 container exec_died 807fd1c66fa47b1f282c2487140a988a20428cb5ae8b3725296c3c9838386c77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 22 05:12:26 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:12:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:12:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:12:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:12:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:27.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:27.123Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:27.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:27.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:27.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:27 np0005591760 nova_compute[248045]: 2026-01-22 10:12:27.296 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:12:27 np0005591760 nova_compute[248045]: 2026-01-22 10:12:27.310 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:12:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1217: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:27 np0005591760 nova_compute[248045]: 2026-01-22 10:12:27.326 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:12:27 np0005591760 nova_compute[248045]: 2026-01-22 10:12:27.326 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:12:27 np0005591760 nova_compute[248045]: 2026-01-22 10:12:27.326 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:12:27 np0005591760 nova_compute[248045]: 2026-01-22 10:12:27.327 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:12:27 np0005591760 nova_compute[248045]: 2026-01-22 10:12:27.327 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:12:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:27.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:12:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:12:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:12:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:12:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:12:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:12:27] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:12:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:12:27] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:12:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:12:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1087184043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:12:27 np0005591760 nova_compute[248045]: 2026-01-22 10:12:27.684 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:12:27 np0005591760 nova_compute[248045]: 2026-01-22 10:12:27.912 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:12:27 np0005591760 nova_compute[248045]: 2026-01-22 10:12:27.913 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4502MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, 
"label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:12:27 np0005591760 nova_compute[248045]: 2026-01-22 10:12:27.913 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:12:27 np0005591760 nova_compute[248045]: 2026-01-22 10:12:27.914 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:12:27 np0005591760 podman[281342]: 2026-01-22 10:12:27.974327117 +0000 UTC m=+0.028627364 container create d92fa5978b2fe460aa26530148642abfef7b60639c027eb1681eb4a53c0a81cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 22 05:12:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:28 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:28 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:12:28 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:12:28 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:12:28 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:12:28 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:12:28 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:12:28 np0005591760 systemd[1]: Started libpod-conmon-d92fa5978b2fe460aa26530148642abfef7b60639c027eb1681eb4a53c0a81cf.scope.
Jan 22 05:12:28 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:12:28 np0005591760 podman[281342]: 2026-01-22 10:12:28.040590293 +0000 UTC m=+0.094890541 container init d92fa5978b2fe460aa26530148642abfef7b60639c027eb1681eb4a53c0a81cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:12:28 np0005591760 podman[281342]: 2026-01-22 10:12:28.04546139 +0000 UTC m=+0.099761638 container start d92fa5978b2fe460aa26530148642abfef7b60639c027eb1681eb4a53c0a81cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_turing, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:12:28 np0005591760 podman[281342]: 2026-01-22 10:12:28.047040769 +0000 UTC m=+0.101341017 container attach d92fa5978b2fe460aa26530148642abfef7b60639c027eb1681eb4a53c0a81cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_turing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:12:28 np0005591760 sharp_turing[281356]: 167 167
Jan 22 05:12:28 np0005591760 systemd[1]: libpod-d92fa5978b2fe460aa26530148642abfef7b60639c027eb1681eb4a53c0a81cf.scope: Deactivated successfully.
Jan 22 05:12:28 np0005591760 podman[281342]: 2026-01-22 10:12:28.050023335 +0000 UTC m=+0.104323583 container died d92fa5978b2fe460aa26530148642abfef7b60639c027eb1681eb4a53c0a81cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_turing, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 05:12:28 np0005591760 podman[281342]: 2026-01-22 10:12:27.962756723 +0000 UTC m=+0.017056981 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:12:28 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b3b6226cd6f90e3e859724e5621840f5e4aab3bb66b49852becf9d906cda287a-merged.mount: Deactivated successfully.
Jan 22 05:12:28 np0005591760 podman[281342]: 2026-01-22 10:12:28.069253774 +0000 UTC m=+0.123554022 container remove d92fa5978b2fe460aa26530148642abfef7b60639c027eb1681eb4a53c0a81cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_turing, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:12:28 np0005591760 systemd[1]: libpod-conmon-d92fa5978b2fe460aa26530148642abfef7b60639c027eb1681eb4a53c0a81cf.scope: Deactivated successfully.
Jan 22 05:12:28 np0005591760 nova_compute[248045]: 2026-01-22 10:12:28.171 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:28 np0005591760 podman[281377]: 2026-01-22 10:12:28.196646726 +0000 UTC m=+0.032256488 container create 51fa68e74c4713df6aef3c22c8f7c42c7d6f60f493f6f92f5c2a7678c949df17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_leakey, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:12:28 np0005591760 systemd[1]: Started libpod-conmon-51fa68e74c4713df6aef3c22c8f7c42c7d6f60f493f6f92f5c2a7678c949df17.scope.
Jan 22 05:12:28 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:12:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ee87c6c91bc722f076745eb61ce961baf8178e7acb5bc8dd75c480ec7c002f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ee87c6c91bc722f076745eb61ce961baf8178e7acb5bc8dd75c480ec7c002f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ee87c6c91bc722f076745eb61ce961baf8178e7acb5bc8dd75c480ec7c002f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ee87c6c91bc722f076745eb61ce961baf8178e7acb5bc8dd75c480ec7c002f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:28 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ee87c6c91bc722f076745eb61ce961baf8178e7acb5bc8dd75c480ec7c002f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:28 np0005591760 podman[281377]: 2026-01-22 10:12:28.266441204 +0000 UTC m=+0.102050966 container init 51fa68e74c4713df6aef3c22c8f7c42c7d6f60f493f6f92f5c2a7678c949df17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 05:12:28 np0005591760 podman[281377]: 2026-01-22 10:12:28.272734042 +0000 UTC m=+0.108343794 container start 51fa68e74c4713df6aef3c22c8f7c42c7d6f60f493f6f92f5c2a7678c949df17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_leakey, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 22 05:12:28 np0005591760 podman[281377]: 2026-01-22 10:12:28.273862541 +0000 UTC m=+0.109472293 container attach 51fa68e74c4713df6aef3c22c8f7c42c7d6f60f493f6f92f5c2a7678c949df17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_leakey, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Jan 22 05:12:28 np0005591760 podman[281377]: 2026-01-22 10:12:28.184429885 +0000 UTC m=+0.020039647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:12:28 np0005591760 nova_compute[248045]: 2026-01-22 10:12:28.356 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:12:28 np0005591760 nova_compute[248045]: 2026-01-22 10:12:28.357 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:12:28 np0005591760 nova_compute[248045]: 2026-01-22 10:12:28.371 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:12:28 np0005591760 festive_leakey[281390]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:12:28 np0005591760 festive_leakey[281390]: --> All data devices are unavailable
Jan 22 05:12:28 np0005591760 systemd[1]: libpod-51fa68e74c4713df6aef3c22c8f7c42c7d6f60f493f6f92f5c2a7678c949df17.scope: Deactivated successfully.
Jan 22 05:12:28 np0005591760 podman[281425]: 2026-01-22 10:12:28.57349031 +0000 UTC m=+0.019394800 container died 51fa68e74c4713df6aef3c22c8f7c42c7d6f60f493f6f92f5c2a7678c949df17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_leakey, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 05:12:28 np0005591760 systemd[1]: var-lib-containers-storage-overlay-56ee87c6c91bc722f076745eb61ce961baf8178e7acb5bc8dd75c480ec7c002f-merged.mount: Deactivated successfully.
Jan 22 05:12:28 np0005591760 podman[281425]: 2026-01-22 10:12:28.594606667 +0000 UTC m=+0.040511138 container remove 51fa68e74c4713df6aef3c22c8f7c42c7d6f60f493f6f92f5c2a7678c949df17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 05:12:28 np0005591760 systemd[1]: libpod-conmon-51fa68e74c4713df6aef3c22c8f7c42c7d6f60f493f6f92f5c2a7678c949df17.scope: Deactivated successfully.
Jan 22 05:12:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:12:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:12:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2588853446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:12:28 np0005591760 nova_compute[248045]: 2026-01-22 10:12:28.719 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:12:28 np0005591760 nova_compute[248045]: 2026-01-22 10:12:28.724 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:12:28 np0005591760 nova_compute[248045]: 2026-01-22 10:12:28.740 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:12:28 np0005591760 nova_compute[248045]: 2026-01-22 10:12:28.742 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:12:28 np0005591760 nova_compute[248045]: 2026-01-22 10:12:28.742 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:12:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:28.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:28.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:28.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:28.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:29 np0005591760 podman[281523]: 2026-01-22 10:12:29.039930018 +0000 UTC m=+0.033224544 container create 47a4e5827b6fc3dc61f28687ac9c556a3fffb74da6f7ef583ecbce16bc1b66d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:12:29 np0005591760 systemd[1]: Started libpod-conmon-47a4e5827b6fc3dc61f28687ac9c556a3fffb74da6f7ef583ecbce16bc1b66d3.scope.
Jan 22 05:12:29 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:12:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:29.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:29 np0005591760 podman[281523]: 2026-01-22 10:12:29.096589773 +0000 UTC m=+0.089884299 container init 47a4e5827b6fc3dc61f28687ac9c556a3fffb74da6f7ef583ecbce16bc1b66d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_mcclintock, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 05:12:29 np0005591760 podman[281523]: 2026-01-22 10:12:29.101885742 +0000 UTC m=+0.095180258 container start 47a4e5827b6fc3dc61f28687ac9c556a3fffb74da6f7ef583ecbce16bc1b66d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_mcclintock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 22 05:12:29 np0005591760 podman[281523]: 2026-01-22 10:12:29.103845688 +0000 UTC m=+0.097140214 container attach 47a4e5827b6fc3dc61f28687ac9c556a3fffb74da6f7ef583ecbce16bc1b66d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_mcclintock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 05:12:29 np0005591760 friendly_mcclintock[281537]: 167 167
Jan 22 05:12:29 np0005591760 systemd[1]: libpod-47a4e5827b6fc3dc61f28687ac9c556a3fffb74da6f7ef583ecbce16bc1b66d3.scope: Deactivated successfully.
Jan 22 05:12:29 np0005591760 podman[281523]: 2026-01-22 10:12:29.105847043 +0000 UTC m=+0.099141559 container died 47a4e5827b6fc3dc61f28687ac9c556a3fffb74da6f7ef583ecbce16bc1b66d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 05:12:29 np0005591760 systemd[1]: var-lib-containers-storage-overlay-00c8da2d52a4e4c9c0066c1502c1d5a70e62c6cd561243b2c986511fb0b74b42-merged.mount: Deactivated successfully.
Jan 22 05:12:29 np0005591760 podman[281523]: 2026-01-22 10:12:29.027472433 +0000 UTC m=+0.020766969 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:12:29 np0005591760 podman[281523]: 2026-01-22 10:12:29.136542286 +0000 UTC m=+0.129836802 container remove 47a4e5827b6fc3dc61f28687ac9c556a3fffb74da6f7ef583ecbce16bc1b66d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_mcclintock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Jan 22 05:12:29 np0005591760 systemd[1]: libpod-conmon-47a4e5827b6fc3dc61f28687ac9c556a3fffb74da6f7ef583ecbce16bc1b66d3.scope: Deactivated successfully.
Jan 22 05:12:29 np0005591760 podman[281559]: 2026-01-22 10:12:29.279924686 +0000 UTC m=+0.035199510 container create b62550f634dcf6425ecf81854117a2e92b37d5cbe0f6226fc031ae3d4c3b14e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mendel, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:12:29 np0005591760 systemd[1]: Started libpod-conmon-b62550f634dcf6425ecf81854117a2e92b37d5cbe0f6226fc031ae3d4c3b14e9.scope.
Jan 22 05:12:29 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:12:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccc36fd340fb73950be27517f3ed1c894cfe08ab3a0d347b8cf8a21483416d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccc36fd340fb73950be27517f3ed1c894cfe08ab3a0d347b8cf8a21483416d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccc36fd340fb73950be27517f3ed1c894cfe08ab3a0d347b8cf8a21483416d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:29 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccc36fd340fb73950be27517f3ed1c894cfe08ab3a0d347b8cf8a21483416d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:29 np0005591760 podman[281559]: 2026-01-22 10:12:29.346812561 +0000 UTC m=+0.102087395 container init b62550f634dcf6425ecf81854117a2e92b37d5cbe0f6226fc031ae3d4c3b14e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mendel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:12:29 np0005591760 podman[281559]: 2026-01-22 10:12:29.353115669 +0000 UTC m=+0.108390483 container start b62550f634dcf6425ecf81854117a2e92b37d5cbe0f6226fc031ae3d4c3b14e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mendel, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 22 05:12:29 np0005591760 podman[281559]: 2026-01-22 10:12:29.354442 +0000 UTC m=+0.109716815 container attach b62550f634dcf6425ecf81854117a2e92b37d5cbe0f6226fc031ae3d4c3b14e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 05:12:29 np0005591760 podman[281559]: 2026-01-22 10:12:29.265890626 +0000 UTC m=+0.021165450 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:12:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:29.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]: {
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:    "0": [
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:        {
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            "devices": [
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "/dev/loop3"
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            ],
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            "lv_name": "ceph_lv0",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            "lv_size": "21470642176",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            "name": "ceph_lv0",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            "tags": {
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.cluster_name": "ceph",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.crush_device_class": "",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.encrypted": "0",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.osd_id": "0",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.type": "block",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.vdo": "0",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:                "ceph.with_tpm": "0"
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            },
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            "type": "block",
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:            "vg_name": "ceph_vg0"
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:        }
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]:    ]
Jan 22 05:12:29 np0005591760 priceless_mendel[281572]: }
Jan 22 05:12:29 np0005591760 systemd[1]: libpod-b62550f634dcf6425ecf81854117a2e92b37d5cbe0f6226fc031ae3d4c3b14e9.scope: Deactivated successfully.
Jan 22 05:12:29 np0005591760 conmon[281572]: conmon b62550f634dcf6425ecf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b62550f634dcf6425ecf81854117a2e92b37d5cbe0f6226fc031ae3d4c3b14e9.scope/container/memory.events
Jan 22 05:12:29 np0005591760 podman[281581]: 2026-01-22 10:12:29.632989391 +0000 UTC m=+0.019541246 container died b62550f634dcf6425ecf81854117a2e92b37d5cbe0f6226fc031ae3d4c3b14e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:12:29 np0005591760 systemd[1]: var-lib-containers-storage-overlay-4ccc36fd340fb73950be27517f3ed1c894cfe08ab3a0d347b8cf8a21483416d3-merged.mount: Deactivated successfully.
Jan 22 05:12:29 np0005591760 podman[281581]: 2026-01-22 10:12:29.656742752 +0000 UTC m=+0.043294607 container remove b62550f634dcf6425ecf81854117a2e92b37d5cbe0f6226fc031ae3d4c3b14e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:12:29 np0005591760 systemd[1]: libpod-conmon-b62550f634dcf6425ecf81854117a2e92b37d5cbe0f6226fc031ae3d4c3b14e9.scope: Deactivated successfully.
Jan 22 05:12:29 np0005591760 nova_compute[248045]: 2026-01-22 10:12:29.732 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:12:29 np0005591760 nova_compute[248045]: 2026-01-22 10:12:29.733 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:12:29 np0005591760 nova_compute[248045]: 2026-01-22 10:12:29.734 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:12:29 np0005591760 nova_compute[248045]: 2026-01-22 10:12:29.734 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:12:30 np0005591760 podman[281675]: 2026-01-22 10:12:30.070988979 +0000 UTC m=+0.027972650 container create e3103ecc1b3141d5b0961cb8f9d399d3bf73ec93eb3aecdfc1fcdc7ff792db75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 22 05:12:30 np0005591760 systemd[1]: Started libpod-conmon-e3103ecc1b3141d5b0961cb8f9d399d3bf73ec93eb3aecdfc1fcdc7ff792db75.scope.
Jan 22 05:12:30 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:12:30 np0005591760 podman[281675]: 2026-01-22 10:12:30.127401968 +0000 UTC m=+0.084385637 container init e3103ecc1b3141d5b0961cb8f9d399d3bf73ec93eb3aecdfc1fcdc7ff792db75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_austin, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 22 05:12:30 np0005591760 podman[281675]: 2026-01-22 10:12:30.13164478 +0000 UTC m=+0.088628450 container start e3103ecc1b3141d5b0961cb8f9d399d3bf73ec93eb3aecdfc1fcdc7ff792db75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_austin, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 22 05:12:30 np0005591760 podman[281675]: 2026-01-22 10:12:30.132792134 +0000 UTC m=+0.089775803 container attach e3103ecc1b3141d5b0961cb8f9d399d3bf73ec93eb3aecdfc1fcdc7ff792db75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_austin, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:12:30 np0005591760 awesome_austin[281688]: 167 167
Jan 22 05:12:30 np0005591760 systemd[1]: libpod-e3103ecc1b3141d5b0961cb8f9d399d3bf73ec93eb3aecdfc1fcdc7ff792db75.scope: Deactivated successfully.
Jan 22 05:12:30 np0005591760 podman[281675]: 2026-01-22 10:12:30.135130414 +0000 UTC m=+0.092114074 container died e3103ecc1b3141d5b0961cb8f9d399d3bf73ec93eb3aecdfc1fcdc7ff792db75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_austin, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:12:30 np0005591760 systemd[1]: var-lib-containers-storage-overlay-db5a1946c451a64463b6e13f9918f3d2a649fd1df76f99325d83a432636d1fe4-merged.mount: Deactivated successfully.
Jan 22 05:12:30 np0005591760 podman[281675]: 2026-01-22 10:12:30.153234099 +0000 UTC m=+0.110217769 container remove e3103ecc1b3141d5b0961cb8f9d399d3bf73ec93eb3aecdfc1fcdc7ff792db75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_austin, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:12:30 np0005591760 podman[281675]: 2026-01-22 10:12:30.060500798 +0000 UTC m=+0.017484488 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:12:30 np0005591760 systemd[1]: libpod-conmon-e3103ecc1b3141d5b0961cb8f9d399d3bf73ec93eb3aecdfc1fcdc7ff792db75.scope: Deactivated successfully.
Jan 22 05:12:30 np0005591760 podman[281710]: 2026-01-22 10:12:30.274289208 +0000 UTC m=+0.027638380 container create afb5edb968f8423b5bfd8f657d700610c2488d5d3c396df748ce0255b865a053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 05:12:30 np0005591760 systemd[1]: Started libpod-conmon-afb5edb968f8423b5bfd8f657d700610c2488d5d3c396df748ce0255b865a053.scope.
Jan 22 05:12:30 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:12:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/178c7480960b4d82072e07b0828056f4a0866da7e4f5ef1ba2e6c46a189651bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/178c7480960b4d82072e07b0828056f4a0866da7e4f5ef1ba2e6c46a189651bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/178c7480960b4d82072e07b0828056f4a0866da7e4f5ef1ba2e6c46a189651bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:30 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/178c7480960b4d82072e07b0828056f4a0866da7e4f5ef1ba2e6c46a189651bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:12:30 np0005591760 podman[281710]: 2026-01-22 10:12:30.332769615 +0000 UTC m=+0.086118777 container init afb5edb968f8423b5bfd8f657d700610c2488d5d3c396df748ce0255b865a053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:12:30 np0005591760 podman[281710]: 2026-01-22 10:12:30.337274382 +0000 UTC m=+0.090623534 container start afb5edb968f8423b5bfd8f657d700610c2488d5d3c396df748ce0255b865a053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:12:30 np0005591760 podman[281710]: 2026-01-22 10:12:30.338433428 +0000 UTC m=+0.091782589 container attach afb5edb968f8423b5bfd8f657d700610c2488d5d3c396df748ce0255b865a053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:12:30 np0005591760 podman[281710]: 2026-01-22 10:12:30.263136593 +0000 UTC m=+0.016485786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:12:30 np0005591760 nova_compute[248045]: 2026-01-22 10:12:30.681 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:30 np0005591760 bold_herschel[281723]: {}
Jan 22 05:12:30 np0005591760 lvm[281800]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:12:30 np0005591760 lvm[281800]: VG ceph_vg0 finished
Jan 22 05:12:30 np0005591760 podman[281710]: 2026-01-22 10:12:30.793260965 +0000 UTC m=+0.546610127 container died afb5edb968f8423b5bfd8f657d700610c2488d5d3c396df748ce0255b865a053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_herschel, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 22 05:12:30 np0005591760 systemd[1]: libpod-afb5edb968f8423b5bfd8f657d700610c2488d5d3c396df748ce0255b865a053.scope: Deactivated successfully.
Jan 22 05:12:30 np0005591760 systemd[1]: var-lib-containers-storage-overlay-178c7480960b4d82072e07b0828056f4a0866da7e4f5ef1ba2e6c46a189651bf-merged.mount: Deactivated successfully.
Jan 22 05:12:30 np0005591760 podman[281710]: 2026-01-22 10:12:30.813728929 +0000 UTC m=+0.567078091 container remove afb5edb968f8423b5bfd8f657d700610c2488d5d3c396df748ce0255b865a053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_herschel, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 22 05:12:30 np0005591760 systemd[1]: libpod-conmon-afb5edb968f8423b5bfd8f657d700610c2488d5d3c396df748ce0255b865a053.scope: Deactivated successfully.
Jan 22 05:12:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:12:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:12:30 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:12:30 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:12:31 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:12:31 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:12:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:31.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:31.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:12:32 np0005591760 nova_compute[248045]: 2026-01-22 10:12:32.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:12:32 np0005591760 nova_compute[248045]: 2026-01-22 10:12:32.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:12:32 np0005591760 nova_compute[248045]: 2026-01-22 10:12:32.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:12:32 np0005591760 nova_compute[248045]: 2026-01-22 10:12:32.313 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:12:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:32 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:32 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:32 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:33 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:12:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:33.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:12:33 np0005591760 nova_compute[248045]: 2026-01-22 10:12:33.173 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:33 np0005591760 nova_compute[248045]: 2026-01-22 10:12:33.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:12:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:33.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:12:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:33.604Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:33.611Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:33.611Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:33.612Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:12:34 np0005591760 nova_compute[248045]: 2026-01-22 10:12:34.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:12:34 np0005591760 nova_compute[248045]: 2026-01-22 10:12:34.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:12:35 np0005591760 podman[281840]: 2026-01-22 10:12:35.04942954 +0000 UTC m=+0.042616778 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:12:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:35.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:12:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:35.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:12:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:12:35 np0005591760 nova_compute[248045]: 2026-01-22 10:12:35.683 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:37 np0005591760 podman[281883]: 2026-01-22 10:12:37.066434775 +0000 UTC m=+0.058693461 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 05:12:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:37.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:37.123Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:37.131Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:37.131Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:37.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:37.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:12:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:12:37] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:12:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:12:37] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:12:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:37 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:37 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:37 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:38 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:38 np0005591760 nova_compute[248045]: 2026-01-22 10:12:38.175 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:12:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:38.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:38.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:38.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:38.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:39.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:39.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:40 np0005591760 nova_compute[248045]: 2026-01-22 10:12:40.683 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:41.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:41.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:12:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:43 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:12:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:43.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:12:43 np0005591760 nova_compute[248045]: 2026-01-22 10:12:43.177 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:43.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:43.605Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:43.613Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:43.614Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:43.614Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.622127) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076763622148, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2375, "num_deletes": 508, "total_data_size": 4173864, "memory_usage": 4247136, "flush_reason": "Manual Compaction"}
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076763629832, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 4036429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32288, "largest_seqno": 34661, "table_properties": {"data_size": 4026351, "index_size": 5932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 23679, "raw_average_key_size": 19, "raw_value_size": 4004064, "raw_average_value_size": 3284, "num_data_blocks": 257, "num_entries": 1219, "num_filter_entries": 1219, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769076553, "oldest_key_time": 1769076553, "file_creation_time": 1769076763, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 7726 microseconds, and 5640 cpu microseconds.
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.629854) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 4036429 bytes OK
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.629864) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.630186) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.630195) EVENT_LOG_v1 {"time_micros": 1769076763630192, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.630204) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 4163259, prev total WAL file size 4163259, number of live WAL files 2.
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.630826) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(3941KB)], [71(11MB)]
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076763630843, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 16598501, "oldest_snapshot_seqno": -1}
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6617 keys, 14343102 bytes, temperature: kUnknown
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076763656342, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 14343102, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14299935, "index_size": 25519, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 173888, "raw_average_key_size": 26, "raw_value_size": 14181427, "raw_average_value_size": 2143, "num_data_blocks": 1005, "num_entries": 6617, "num_filter_entries": 6617, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769074336, "oldest_key_time": 0, "file_creation_time": 1769076763, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "804a04cf-10ce-4c4c-aa43-09122b4af995", "db_session_id": "7MI1YN0I5S0TQSJVCTNU", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.656440) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 14343102 bytes
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.657455) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 650.4 rd, 562.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 12.0 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 7648, records dropped: 1031 output_compression: NoCompression
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.657467) EVENT_LOG_v1 {"time_micros": 1769076763657462, "job": 40, "event": "compaction_finished", "compaction_time_micros": 25520, "compaction_time_cpu_micros": 20309, "output_level": 6, "num_output_files": 1, "total_output_size": 14343102, "num_input_records": 7648, "num_output_records": 6617, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076763657965, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769076763659474, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.630776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.659508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.659511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.659512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.659514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: rocksdb: (Original Log Time 2026/01/22-10:12:43.659515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 05:12:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:12:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:12:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:45.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:12:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:12:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:45.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:12:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:12:45 np0005591760 nova_compute[248045]: 2026-01-22 10:12:45.684 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:12:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:47.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:47.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:47.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:47.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:47.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:12:47.333 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 05:12:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:12:47.334 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 05:12:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:12:47.334 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 05:12:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:12:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:47.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:12:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:12:47] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:12:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:12:47] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:12:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:48 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:48 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:48 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:48 np0005591760 nova_compute[248045]: 2026-01-22 10:12:48.178 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:12:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:12:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-crash-compute-0[79369]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 22 05:12:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:48.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:48.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:48.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:48.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.002000021s ======
Jan 22 05:12:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:49.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:12:49
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['volumes', 'images', 'backups', '.rgw.root', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.log', '.nfs', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:12:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:49.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:12:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:12:50 np0005591760 nova_compute[248045]: 2026-01-22 10:12:50.685 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:12:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:12:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:51.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:12:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:12:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:51.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:12:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:12:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:52 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:52 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:52 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:53 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:53.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:53 np0005591760 nova_compute[248045]: 2026-01-22 10:12:53.180 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:12:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:12:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:53.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:12:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:53.606Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:53.613Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:53.614Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:53.614Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:12:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:55.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:55.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:12:55 np0005591760 nova_compute[248045]: 2026-01-22 10:12:55.685 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:12:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:57.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:57.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:57.137Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:57.137Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:57.138Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:57.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:12:57] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:12:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:12:57] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:12:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:57 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:12:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:57 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:12:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:57 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:12:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:12:58 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:12:58 np0005591760 nova_compute[248045]: 2026-01-22 10:12:58.182 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:12:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:12:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:58.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:58.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:58.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:12:58.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:12:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:12:59.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:12:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:12:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:12:59.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:12:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:13:00 np0005591760 nova_compute[248045]: 2026-01-22 10:13:00.687 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:13:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:01.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:13:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:01.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:13:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:13:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:02 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:13:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:02 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:13:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:02 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:13:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:03 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:13:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:03.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:03 np0005591760 nova_compute[248045]: 2026-01-22 10:13:03.183 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:13:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:13:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:03.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:13:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:03.607Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:03.616Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:03.616Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:03.617Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:13:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:05.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:13:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:05.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:13:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:13:05 np0005591760 nova_compute[248045]: 2026-01-22 10:13:05.688 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:06 np0005591760 podman[281961]: 2026-01-22 10:13:06.048836839 +0000 UTC m=+0.042126074 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 05:13:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:07.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:07.137Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:07.137Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:07.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:07.138Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:13:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:07.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:13:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:13:07] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:13:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:13:07] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:13:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:07 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:13:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:07 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:13:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:07 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:13:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:08 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:13:08 np0005591760 podman[281980]: 2026-01-22 10:13:08.066544959 +0000 UTC m=+0.061256876 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 22 05:13:08 np0005591760 nova_compute[248045]: 2026-01-22 10:13:08.184 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:13:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:08.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:08.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:08.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:08.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:09.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:09.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:10 np0005591760 nova_compute[248045]: 2026-01-22 10:13:10.689 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:11.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:11.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:13:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:12 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:13:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:12 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:13:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:13 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:13:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:13 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:13:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:13.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:13 np0005591760 nova_compute[248045]: 2026-01-22 10:13:13.185 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:13.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:13.608Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:13.617Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:13.618Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:13.618Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:13:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:13:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:15.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:13:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:15.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:13:15 np0005591760 nova_compute[248045]: 2026-01-22 10:13:15.691 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:17.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:17.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:17.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:17.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:17.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:17 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:17 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:17 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:17.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:17 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:17 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:13:17] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:13:17 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:13:17] "GET /metrics HTTP/1.1" 200 48599 "" "Prometheus/2.51.0"
Jan 22 05:13:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:17 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:13:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:17 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:13:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:17 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:13:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:18 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:13:18 np0005591760 nova_compute[248045]: 2026-01-22 10:13:18.188 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:18 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:13:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:18.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:18.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:18.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:18 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:18.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:19.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:13:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:13:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:13:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:13:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:13:19 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:13:19 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:19 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:19 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:19.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:19 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:20 np0005591760 nova_compute[248045]: 2026-01-22 10:13:20.694 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:21.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:21 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:21 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:21 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:21.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:21 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:13:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:22 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:13:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:22 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:13:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:13:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:23 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:13:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:23.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:23 np0005591760 nova_compute[248045]: 2026-01-22 10:13:23.190 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:23 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:23 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:23 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:23.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:23 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:23.608Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:23.620Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:23.620Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:23 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:23.622Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:23 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:13:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:25.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:25 np0005591760 nova_compute[248045]: 2026-01-22 10:13:25.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:13:25 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:25 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:25 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:25.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:25 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:13:25 np0005591760 nova_compute[248045]: 2026-01-22 10:13:25.696 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:27.127Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:27.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:27.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:27.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:27.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:27 np0005591760 nova_compute[248045]: 2026-01-22 10:13:27.299 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:13:27 np0005591760 nova_compute[248045]: 2026-01-22 10:13:27.320 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:13:27 np0005591760 nova_compute[248045]: 2026-01-22 10:13:27.320 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:13:27 np0005591760 nova_compute[248045]: 2026-01-22 10:13:27.320 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:13:27 np0005591760 nova_compute[248045]: 2026-01-22 10:13:27.320 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 05:13:27 np0005591760 nova_compute[248045]: 2026-01-22 10:13:27.321 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:13:27 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:27 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 05:13:27 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:27.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 05:13:27 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:27 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:13:27] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:13:27 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:13:27] "GET /metrics HTTP/1.1" 200 48598 "" "Prometheus/2.51.0"
Jan 22 05:13:27 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:13:27 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/318186594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:13:27 np0005591760 nova_compute[248045]: 2026-01-22 10:13:27.683 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:13:27 np0005591760 nova_compute[248045]: 2026-01-22 10:13:27.927 248049 WARNING nova.virt.libvirt.driver [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 05:13:27 np0005591760 nova_compute[248045]: 2026-01-22 10:13:27.928 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4526MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 05:13:27 np0005591760 nova_compute[248045]: 2026-01-22 10:13:27.928 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:13:27 np0005591760 nova_compute[248045]: 2026-01-22 10:13:27.929 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:13:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:13:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:13:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:27 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:13:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:28 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.081 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.081 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.094 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing inventories for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.147 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating ProviderTree inventory for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.148 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Updating inventory in ProviderTree for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.162 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing aggregate associations for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.176 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Refreshing trait associations for resource provider 2b3e95f6-2954-4361-8d92-e808c4373b7f, traits: HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,HW_CPU_X86_AVX512VAES,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,HW_CPU_X86_SSE41,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.185 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.204 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 22 05:13:28 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254825776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.539 248049 DEBUG oslo_concurrency.processutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.543 248049 DEBUG nova.compute.provider_tree [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed in ProviderTree for provider: 2b3e95f6-2954-4361-8d92-e808c4373b7f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.556 248049 DEBUG nova.scheduler.client.report [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Inventory has not changed for provider 2b3e95f6-2954-4361-8d92-e808c4373b7f based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.557 248049 DEBUG nova.compute.resource_tracker [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.557 248049 DEBUG oslo_concurrency.lockutils [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:13:28 np0005591760 nova_compute[248045]: 2026-01-22 10:13:28.558 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:13:28 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:13:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:28.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:28.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:28.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:28 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:28.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:13:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:29.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:13:29 np0005591760 nova_compute[248045]: 2026-01-22 10:13:29.307 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:13:29 np0005591760 nova_compute[248045]: 2026-01-22 10:13:29.307 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:13:29 np0005591760 nova_compute[248045]: 2026-01-22 10:13:29.307 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 05:13:29 np0005591760 nova_compute[248045]: 2026-01-22 10:13:29.307 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:13:29 np0005591760 nova_compute[248045]: 2026-01-22 10:13:29.308 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 05:13:29 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:29 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:29 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:29.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:29 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 05:13:29 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 13K writes, 46K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 13K writes, 4437 syncs, 3.01 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 708 writes, 1064 keys, 708 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s
Interval WAL: 708 writes, 354 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 05:13:30 np0005591760 nova_compute[248045]: 2026-01-22 10:13:30.698 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:31.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:31 np0005591760 nova_compute[248045]: 2026-01-22 10:13:31.316 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:13:31 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:31 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:31 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:31.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:13:31 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:13:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 22 05:13:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:13:31 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 22 05:13:31 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:13:31 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 05:13:31 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:13:31 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:13:31 np0005591760 ceph-mon[74254]: from='mgr.14664 192.168.122.100:0/1879606959' entity='mgr.compute-0.rfmoog' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 05:13:32 np0005591760 podman[282258]: 2026-01-22 10:13:32.106319265 +0000 UTC m=+0.028402243 container create 888741bc32d8d196de911b98801fee51544eb8b5463fed391fb7df44586573f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bhaskara, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:13:32 np0005591760 systemd[1]: Started libpod-conmon-888741bc32d8d196de911b98801fee51544eb8b5463fed391fb7df44586573f8.scope.
Jan 22 05:13:32 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:13:32 np0005591760 podman[282258]: 2026-01-22 10:13:32.16174609 +0000 UTC m=+0.083829077 container init 888741bc32d8d196de911b98801fee51544eb8b5463fed391fb7df44586573f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bhaskara, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:13:32 np0005591760 podman[282258]: 2026-01-22 10:13:32.168413938 +0000 UTC m=+0.090496915 container start 888741bc32d8d196de911b98801fee51544eb8b5463fed391fb7df44586573f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:13:32 np0005591760 podman[282258]: 2026-01-22 10:13:32.169519325 +0000 UTC m=+0.091602322 container attach 888741bc32d8d196de911b98801fee51544eb8b5463fed391fb7df44586573f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:13:32 np0005591760 lucid_bhaskara[282271]: 167 167
Jan 22 05:13:32 np0005591760 systemd[1]: libpod-888741bc32d8d196de911b98801fee51544eb8b5463fed391fb7df44586573f8.scope: Deactivated successfully.
Jan 22 05:13:32 np0005591760 podman[282258]: 2026-01-22 10:13:32.173917962 +0000 UTC m=+0.096000939 container died 888741bc32d8d196de911b98801fee51544eb8b5463fed391fb7df44586573f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bhaskara, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 05:13:32 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a12e44b9dd1243276acef550ac8ed5a4d2a39c62060d6534e0de2c1492871e27-merged.mount: Deactivated successfully.
Jan 22 05:13:32 np0005591760 podman[282258]: 2026-01-22 10:13:32.095046259 +0000 UTC m=+0.017129247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:13:32 np0005591760 podman[282258]: 2026-01-22 10:13:32.194585923 +0000 UTC m=+0.116668900 container remove 888741bc32d8d196de911b98801fee51544eb8b5463fed391fb7df44586573f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bhaskara, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 05:13:32 np0005591760 systemd[1]: libpod-conmon-888741bc32d8d196de911b98801fee51544eb8b5463fed391fb7df44586573f8.scope: Deactivated successfully.
Jan 22 05:13:32 np0005591760 podman[282293]: 2026-01-22 10:13:32.351373637 +0000 UTC m=+0.042650783 container create 9be1c602020a270afb4e6d4847750a4b476d054da006cdb75713ed2300a5679d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:13:32 np0005591760 systemd[1]: Started libpod-conmon-9be1c602020a270afb4e6d4847750a4b476d054da006cdb75713ed2300a5679d.scope.
Jan 22 05:13:32 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:13:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ed9a96e6763d959a694a6b867007190ea8dada5d82b6810c117faa2881d1b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ed9a96e6763d959a694a6b867007190ea8dada5d82b6810c117faa2881d1b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ed9a96e6763d959a694a6b867007190ea8dada5d82b6810c117faa2881d1b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ed9a96e6763d959a694a6b867007190ea8dada5d82b6810c117faa2881d1b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:32 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ed9a96e6763d959a694a6b867007190ea8dada5d82b6810c117faa2881d1b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:32 np0005591760 podman[282293]: 2026-01-22 10:13:32.425132204 +0000 UTC m=+0.116409360 container init 9be1c602020a270afb4e6d4847750a4b476d054da006cdb75713ed2300a5679d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:13:32 np0005591760 podman[282293]: 2026-01-22 10:13:32.333848375 +0000 UTC m=+0.025125531 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:13:32 np0005591760 podman[282293]: 2026-01-22 10:13:32.433928969 +0000 UTC m=+0.125206115 container start 9be1c602020a270afb4e6d4847750a4b476d054da006cdb75713ed2300a5679d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_easley, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:13:32 np0005591760 podman[282293]: 2026-01-22 10:13:32.435998524 +0000 UTC m=+0.127275680 container attach 9be1c602020a270afb4e6d4847750a4b476d054da006cdb75713ed2300a5679d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:13:32 np0005591760 cranky_easley[282306]: --> passed data devices: 0 physical, 1 LVM
Jan 22 05:13:32 np0005591760 cranky_easley[282306]: --> All data devices are unavailable
Jan 22 05:13:32 np0005591760 systemd[1]: libpod-9be1c602020a270afb4e6d4847750a4b476d054da006cdb75713ed2300a5679d.scope: Deactivated successfully.
Jan 22 05:13:32 np0005591760 conmon[282306]: conmon 9be1c602020a270afb4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9be1c602020a270afb4e6d4847750a4b476d054da006cdb75713ed2300a5679d.scope/container/memory.events
Jan 22 05:13:32 np0005591760 podman[282293]: 2026-01-22 10:13:32.736760558 +0000 UTC m=+0.428037704 container died 9be1c602020a270afb4e6d4847750a4b476d054da006cdb75713ed2300a5679d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:13:32 np0005591760 systemd[1]: var-lib-containers-storage-overlay-c5ed9a96e6763d959a694a6b867007190ea8dada5d82b6810c117faa2881d1b2-merged.mount: Deactivated successfully.
Jan 22 05:13:32 np0005591760 podman[282293]: 2026-01-22 10:13:32.764577645 +0000 UTC m=+0.455854791 container remove 9be1c602020a270afb4e6d4847750a4b476d054da006cdb75713ed2300a5679d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_easley, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 05:13:32 np0005591760 systemd[1]: libpod-conmon-9be1c602020a270afb4e6d4847750a4b476d054da006cdb75713ed2300a5679d.scope: Deactivated successfully.
Jan 22 05:13:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:32 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:13:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:33 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:13:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:33 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:13:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:33 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:13:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:33.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:33 np0005591760 nova_compute[248045]: 2026-01-22 10:13:33.207 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:33 np0005591760 podman[282412]: 2026-01-22 10:13:33.260187215 +0000 UTC m=+0.035296209 container create aefc1b5997cba54e8d6bbacf4612639f930e6d9803496cfc6208ba310a35ff85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 22 05:13:33 np0005591760 systemd[1]: Started libpod-conmon-aefc1b5997cba54e8d6bbacf4612639f930e6d9803496cfc6208ba310a35ff85.scope.
Jan 22 05:13:33 np0005591760 nova_compute[248045]: 2026-01-22 10:13:33.301 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:13:33 np0005591760 nova_compute[248045]: 2026-01-22 10:13:33.301 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 05:13:33 np0005591760 nova_compute[248045]: 2026-01-22 10:13:33.302 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 05:13:33 np0005591760 nova_compute[248045]: 2026-01-22 10:13:33.317 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 05:13:33 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:13:33 np0005591760 podman[282412]: 2026-01-22 10:13:33.328321631 +0000 UTC m=+0.103430635 container init aefc1b5997cba54e8d6bbacf4612639f930e6d9803496cfc6208ba310a35ff85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ardinghelli, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 22 05:13:33 np0005591760 podman[282412]: 2026-01-22 10:13:33.333697443 +0000 UTC m=+0.108806436 container start aefc1b5997cba54e8d6bbacf4612639f930e6d9803496cfc6208ba310a35ff85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ardinghelli, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 22 05:13:33 np0005591760 podman[282412]: 2026-01-22 10:13:33.335282503 +0000 UTC m=+0.110391497 container attach aefc1b5997cba54e8d6bbacf4612639f930e6d9803496cfc6208ba310a35ff85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:13:33 np0005591760 interesting_ardinghelli[282425]: 167 167
Jan 22 05:13:33 np0005591760 systemd[1]: libpod-aefc1b5997cba54e8d6bbacf4612639f930e6d9803496cfc6208ba310a35ff85.scope: Deactivated successfully.
Jan 22 05:13:33 np0005591760 podman[282412]: 2026-01-22 10:13:33.33881117 +0000 UTC m=+0.113920164 container died aefc1b5997cba54e8d6bbacf4612639f930e6d9803496cfc6208ba310a35ff85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ardinghelli, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:13:33 np0005591760 podman[282412]: 2026-01-22 10:13:33.246025517 +0000 UTC m=+0.021134521 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:13:33 np0005591760 systemd[1]: var-lib-containers-storage-overlay-b1a5eeb281937c0874785e0f8fd2107818aa730e61f055c9e8cc60f84949b750-merged.mount: Deactivated successfully.
Jan 22 05:13:33 np0005591760 podman[282412]: 2026-01-22 10:13:33.360286905 +0000 UTC m=+0.135395899 container remove aefc1b5997cba54e8d6bbacf4612639f930e6d9803496cfc6208ba310a35ff85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ardinghelli, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 05:13:33 np0005591760 systemd[1]: libpod-conmon-aefc1b5997cba54e8d6bbacf4612639f930e6d9803496cfc6208ba310a35ff85.scope: Deactivated successfully.
Jan 22 05:13:33 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:33 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:33 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:33.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:33 np0005591760 podman[282448]: 2026-01-22 10:13:33.509680859 +0000 UTC m=+0.042539883 container create 4559d69cb0955d6eeb3aa1c38702355438315b9c7df3311ffe898e1b423fd83a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shtern, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 05:13:33 np0005591760 systemd[1]: Started libpod-conmon-4559d69cb0955d6eeb3aa1c38702355438315b9c7df3311ffe898e1b423fd83a.scope.
Jan 22 05:13:33 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:13:33 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ebe9e4ec7220b7db05cc282d5d695c8fd915b76539a0c46f307a02e2d9f03b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:33 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ebe9e4ec7220b7db05cc282d5d695c8fd915b76539a0c46f307a02e2d9f03b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:33 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ebe9e4ec7220b7db05cc282d5d695c8fd915b76539a0c46f307a02e2d9f03b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:33 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ebe9e4ec7220b7db05cc282d5d695c8fd915b76539a0c46f307a02e2d9f03b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:33 np0005591760 podman[282448]: 2026-01-22 10:13:33.579513691 +0000 UTC m=+0.112372705 container init 4559d69cb0955d6eeb3aa1c38702355438315b9c7df3311ffe898e1b423fd83a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shtern, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:13:33 np0005591760 podman[282448]: 2026-01-22 10:13:33.585995589 +0000 UTC m=+0.118854603 container start 4559d69cb0955d6eeb3aa1c38702355438315b9c7df3311ffe898e1b423fd83a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shtern, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:13:33 np0005591760 podman[282448]: 2026-01-22 10:13:33.494087633 +0000 UTC m=+0.026946657 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:13:33 np0005591760 podman[282448]: 2026-01-22 10:13:33.588624228 +0000 UTC m=+0.121483242 container attach 4559d69cb0955d6eeb3aa1c38702355438315b9c7df3311ffe898e1b423fd83a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shtern, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 05:13:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:33.610Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:33.618Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:33.619Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:33 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:33.619Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:33 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:13:33 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]: {
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:    "0": [
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:        {
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            "devices": [
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "/dev/loop3"
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            ],
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            "lv_name": "ceph_lv0",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            "lv_size": "21470642176",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=43df7a30-cf5f-5209-adfd-bf44298b19f2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=64f30ffd-1e43-4897-997f-ebad3f519f02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            "lv_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            "name": "ceph_lv0",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            "tags": {
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.block_uuid": "toDgRY-v34C-eczH-ZPqC-Fhji-njJm-ccAwmg",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.cephx_lockbox_secret": "",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.cluster_fsid": "43df7a30-cf5f-5209-adfd-bf44298b19f2",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.cluster_name": "ceph",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.crush_device_class": "",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.encrypted": "0",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.osd_fsid": "64f30ffd-1e43-4897-997f-ebad3f519f02",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.osd_id": "0",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.type": "block",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.vdo": "0",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:                "ceph.with_tpm": "0"
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            },
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            "type": "block",
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:            "vg_name": "ceph_vg0"
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:        }
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]:    ]
Jan 22 05:13:33 np0005591760 sleepy_shtern[282461]: }
Jan 22 05:13:33 np0005591760 systemd[1]: libpod-4559d69cb0955d6eeb3aa1c38702355438315b9c7df3311ffe898e1b423fd83a.scope: Deactivated successfully.
Jan 22 05:13:33 np0005591760 podman[282448]: 2026-01-22 10:13:33.865682174 +0000 UTC m=+0.398541189 container died 4559d69cb0955d6eeb3aa1c38702355438315b9c7df3311ffe898e1b423fd83a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 05:13:33 np0005591760 systemd[1]: var-lib-containers-storage-overlay-8ebe9e4ec7220b7db05cc282d5d695c8fd915b76539a0c46f307a02e2d9f03b1-merged.mount: Deactivated successfully.
Jan 22 05:13:33 np0005591760 podman[282448]: 2026-01-22 10:13:33.890624468 +0000 UTC m=+0.423483483 container remove 4559d69cb0955d6eeb3aa1c38702355438315b9c7df3311ffe898e1b423fd83a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shtern, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 22 05:13:33 np0005591760 systemd[1]: libpod-conmon-4559d69cb0955d6eeb3aa1c38702355438315b9c7df3311ffe898e1b423fd83a.scope: Deactivated successfully.
Jan 22 05:13:34 np0005591760 podman[282562]: 2026-01-22 10:13:34.386563969 +0000 UTC m=+0.034799611 container create d8fc5af63a8c764158ae88c9a7b1aab96e94c16e0a5af942eb17578837c2149b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_cori, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:13:34 np0005591760 systemd[1]: Started libpod-conmon-d8fc5af63a8c764158ae88c9a7b1aab96e94c16e0a5af942eb17578837c2149b.scope.
Jan 22 05:13:34 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:13:34 np0005591760 podman[282562]: 2026-01-22 10:13:34.447228365 +0000 UTC m=+0.095464007 container init d8fc5af63a8c764158ae88c9a7b1aab96e94c16e0a5af942eb17578837c2149b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 05:13:34 np0005591760 podman[282562]: 2026-01-22 10:13:34.453603071 +0000 UTC m=+0.101838713 container start d8fc5af63a8c764158ae88c9a7b1aab96e94c16e0a5af942eb17578837c2149b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 22 05:13:34 np0005591760 podman[282562]: 2026-01-22 10:13:34.455253645 +0000 UTC m=+0.103489286 container attach d8fc5af63a8c764158ae88c9a7b1aab96e94c16e0a5af942eb17578837c2149b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 22 05:13:34 np0005591760 loving_cori[282576]: 167 167
Jan 22 05:13:34 np0005591760 systemd[1]: libpod-d8fc5af63a8c764158ae88c9a7b1aab96e94c16e0a5af942eb17578837c2149b.scope: Deactivated successfully.
Jan 22 05:13:34 np0005591760 podman[282562]: 2026-01-22 10:13:34.4592749 +0000 UTC m=+0.107510542 container died d8fc5af63a8c764158ae88c9a7b1aab96e94c16e0a5af942eb17578837c2149b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_cori, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:13:34 np0005591760 podman[282562]: 2026-01-22 10:13:34.373069191 +0000 UTC m=+0.021304854 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:13:34 np0005591760 systemd[1]: var-lib-containers-storage-overlay-1400b6d45e0e7a9cc20229ea25ba204ae82e37ba605818c6f225ef3ae6dc7aec-merged.mount: Deactivated successfully.
Jan 22 05:13:34 np0005591760 podman[282562]: 2026-01-22 10:13:34.484404559 +0000 UTC m=+0.132640200 container remove d8fc5af63a8c764158ae88c9a7b1aab96e94c16e0a5af942eb17578837c2149b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 05:13:34 np0005591760 systemd[1]: libpod-conmon-d8fc5af63a8c764158ae88c9a7b1aab96e94c16e0a5af942eb17578837c2149b.scope: Deactivated successfully.
Jan 22 05:13:34 np0005591760 podman[282599]: 2026-01-22 10:13:34.62783885 +0000 UTC m=+0.034795103 container create e38d7f87238a27b113ffa49b8ce2a3bacd870245e846f14e4b3a7eac5bf5e888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_hofstadter, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 05:13:34 np0005591760 systemd[1]: Started libpod-conmon-e38d7f87238a27b113ffa49b8ce2a3bacd870245e846f14e4b3a7eac5bf5e888.scope.
Jan 22 05:13:34 np0005591760 systemd[1]: Started libcrun container.
Jan 22 05:13:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1024b22ef3c1f33911bfaa78c96c5795d53949c3b1157c146f3e206b172ff57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1024b22ef3c1f33911bfaa78c96c5795d53949c3b1157c146f3e206b172ff57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1024b22ef3c1f33911bfaa78c96c5795d53949c3b1157c146f3e206b172ff57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:34 np0005591760 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1024b22ef3c1f33911bfaa78c96c5795d53949c3b1157c146f3e206b172ff57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 05:13:34 np0005591760 podman[282599]: 2026-01-22 10:13:34.691394771 +0000 UTC m=+0.098351034 container init e38d7f87238a27b113ffa49b8ce2a3bacd870245e846f14e4b3a7eac5bf5e888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_hofstadter, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 05:13:34 np0005591760 podman[282599]: 2026-01-22 10:13:34.69702939 +0000 UTC m=+0.103985643 container start e38d7f87238a27b113ffa49b8ce2a3bacd870245e846f14e4b3a7eac5bf5e888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_hofstadter, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 05:13:34 np0005591760 podman[282599]: 2026-01-22 10:13:34.69818984 +0000 UTC m=+0.105146093 container attach e38d7f87238a27b113ffa49b8ce2a3bacd870245e846f14e4b3a7eac5bf5e888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_hofstadter, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 22 05:13:34 np0005591760 podman[282599]: 2026-01-22 10:13:34.61483042 +0000 UTC m=+0.021786693 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 22 05:13:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:35.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:35 np0005591760 angry_hofstadter[282612]: {}
Jan 22 05:13:35 np0005591760 lvm[282690]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:13:35 np0005591760 lvm[282690]: VG ceph_vg0 finished
Jan 22 05:13:35 np0005591760 nova_compute[248045]: 2026-01-22 10:13:35.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:13:35 np0005591760 nova_compute[248045]: 2026-01-22 10:13:35.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:13:35 np0005591760 nova_compute[248045]: 2026-01-22 10:13:35.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:13:35 np0005591760 systemd[1]: libpod-e38d7f87238a27b113ffa49b8ce2a3bacd870245e846f14e4b3a7eac5bf5e888.scope: Deactivated successfully.
Jan 22 05:13:35 np0005591760 systemd[1]: libpod-e38d7f87238a27b113ffa49b8ce2a3bacd870245e846f14e4b3a7eac5bf5e888.scope: Consumed 1.031s CPU time.
Jan 22 05:13:35 np0005591760 podman[282599]: 2026-01-22 10:13:35.325060846 +0000 UTC m=+0.732017099 container died e38d7f87238a27b113ffa49b8ce2a3bacd870245e846f14e4b3a7eac5bf5e888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_hofstadter, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 22 05:13:35 np0005591760 systemd[1]: var-lib-containers-storage-overlay-a1024b22ef3c1f33911bfaa78c96c5795d53949c3b1157c146f3e206b172ff57-merged.mount: Deactivated successfully.
Jan 22 05:13:35 np0005591760 podman[282599]: 2026-01-22 10:13:35.351688058 +0000 UTC m=+0.758644311 container remove e38d7f87238a27b113ffa49b8ce2a3bacd870245e846f14e4b3a7eac5bf5e888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_hofstadter, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 05:13:35 np0005591760 systemd[1]: libpod-conmon-e38d7f87238a27b113ffa49b8ce2a3bacd870245e846f14e4b3a7eac5bf5e888.scope: Deactivated successfully.
Jan 22 05:13:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 22 05:13:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:13:35 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 22 05:13:35 np0005591760 ceph-mon[74254]: log_channel(audit) log [INF] : from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:13:35 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:35 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:35 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:35.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:35 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:13:35 np0005591760 nova_compute[248045]: 2026-01-22 10:13:35.698 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:36 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:13:36 np0005591760 ceph-mon[74254]: from='mgr.14664 ' entity='mgr.compute-0.rfmoog' 
Jan 22 05:13:37 np0005591760 podman[282754]: 2026-01-22 10:13:37.048332357 +0000 UTC m=+0.039363242 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 05:13:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:37.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:37.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:37.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:37.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:37.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:37 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:37 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:37 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:37.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:37 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:13:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:13:37 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:13:37] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:13:37 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:13:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:37 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:13:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:37 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:13:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:37 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:13:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:38 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:13:38 np0005591760 nova_compute[248045]: 2026-01-22 10:13:38.209 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:38 np0005591760 nova_compute[248045]: 2026-01-22 10:13:38.300 248049 DEBUG oslo_service.periodic_task [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 05:13:38 np0005591760 nova_compute[248045]: 2026-01-22 10:13:38.300 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 05:13:38 np0005591760 nova_compute[248045]: 2026-01-22 10:13:38.312 248049 DEBUG nova.compute.manager [None req-882ebcc7-e29c-4187-a11b-f077fca9c87c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 05:13:38 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:13:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:38.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:38.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:38.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:38 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:38.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:39 np0005591760 podman[282772]: 2026-01-22 10:13:39.070434518 +0000 UTC m=+0.063602748 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 05:13:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:39.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:39 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:39 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:39 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:39.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:39 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:13:40 np0005591760 nova_compute[248045]: 2026-01-22 10:13:40.699 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:13:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:41.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:13:41 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:41 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:41 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:41.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:41 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 22 05:13:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:13:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:13:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:13:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:42 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:13:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:43.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:43 np0005591760 nova_compute[248045]: 2026-01-22 10:13:43.210 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:43 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:43 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:43 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:43.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:43.610Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:43.617Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:43.617Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:43 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:43.618Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:43 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:43 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:13:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:45.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:45 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:45 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:45 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:45.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:45 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:13:45 np0005591760 nova_compute[248045]: 2026-01-22 10:13:45.701 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:47.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:47.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:47.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:47.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:47.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:13:47.334 164103 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 05:13:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:13:47.335 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 05:13:47 np0005591760 ovn_metadata_agent[164098]: 2026-01-22 10:13:47.335 164103 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 05:13:47 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:47 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:47 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:47.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:47 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:13:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:13:47 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:13:47] "GET /metrics HTTP/1.1" 200 48600 "" "Prometheus/2.51.0"
Jan 22 05:13:47 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:13:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:13:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:13:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:47 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:13:48 np0005591760 nova_compute[248045]: 2026-01-22 10:13:48.211 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:48 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:13:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:48.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:48.977Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:48.977Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:48 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:48.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:13:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:49.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Optimize plan auto_2026-01-22_10:13:49
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] do_upmap
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'backups', '.rgw.root', 'default.rgw.control', '.nfs', 'images', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [balancer INFO root] prepared 0/10 upmap changes
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:13:49 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:49 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:49 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:49.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 05:13:49 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:50 np0005591760 nova_compute[248045]: 2026-01-22 10:13:50.702 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:13:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 22 05:13:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3776065012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 05:13:51 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 22 05:13:51 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3776065012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 05:13:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:51.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:51 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:51 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:51 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:51.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:51 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:13:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:13:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:13:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:51 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:13:52 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:52 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:13:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:53.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:53 np0005591760 nova_compute[248045]: 2026-01-22 10:13:53.212 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:13:53 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:53 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:53 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:53.610Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:53.624Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:53.624Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:53 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:53.625Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:53 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:53 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:13:54 np0005591760 systemd-logind[747]: New session 58 of user zuul.
Jan 22 05:13:54 np0005591760 systemd[1]: Started Session 58 of User zuul.
Jan 22 05:13:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:55.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:55 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:55 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:13:55 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:55.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:13:55 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:13:55 np0005591760 nova_compute[248045]: 2026-01-22 10:13:55.702 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:13:56 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28934 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:13:56 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.18987 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:13:56 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28960 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:13:56 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28949 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:13:56 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19002 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:13:56 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28969 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:13:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:13:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:13:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:56 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:13:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:13:57 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:13:57 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Jan 22 05:13:57 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/894796064' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 05:13:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:57.130Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:57.148Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:57.149Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:57.149Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:57.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:57 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:57 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:57 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:57.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:57 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:13:57] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:13:57 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:13:57] "GET /metrics HTTP/1.1" 200 48601 "" "Prometheus/2.51.0"
Jan 22 05:13:57 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:58 np0005591760 nova_compute[248045]: 2026-01-22 10:13:58.214 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:13:58 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:13:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:58.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:58.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:58.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:58 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:13:58.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:13:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:13:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:13:59.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:13:59 np0005591760 ovs-vsctl[283130]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 22 05:13:59 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:13:59 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:13:59 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:13:59.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 22 05:13:59 np0005591760 ceph-mgr[74522]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 22 05:13:59 np0005591760 virtqemud[247788]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 22 05:13:59 np0005591760 virtqemud[247788]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 22 05:14:00 np0005591760 virtqemud[247788]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 22 05:14:00 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: cache status {prefix=cache status} (starting...)
Jan 22 05:14:00 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:14:00 np0005591760 lvm[283446]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 05:14:00 np0005591760 lvm[283446]: VG ceph_vg0 finished
Jan 22 05:14:00 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: client ls {prefix=client ls} (starting...)
Jan 22 05:14:00 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:14:00 np0005591760 nova_compute[248045]: 2026-01-22 10:14:00.703 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 05:14:00 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.28985 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:01 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19029 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 22 05:14:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3022234077' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 05:14:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 22 05:14:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085221840' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 05:14:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:14:01.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:01 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: damage ls {prefix=damage ls} (starting...)
Jan 22 05:14:01 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:14:01 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29002 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:01 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29005 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:01 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: dump loads {prefix=dump loads} (starting...)
Jan 22 05:14:01 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:14:01 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19056 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:01 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 22 05:14:01 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:14:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 22 05:14:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1978574551' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 05:14:01 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:01 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:01 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:14:01.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 22 05:14:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1580292745' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 05:14:01 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 22 05:14:01 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:14:01 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29020 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:01 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:14:01 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29026 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:01 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 22 05:14:01 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:14:01 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19080 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:01 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Jan 22 05:14:01 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/983171682' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 05:14:01 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 22 05:14:01 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:14:01 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29051 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:01 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29053 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:14:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:14:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:14:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:14:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:14:01 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:14:02 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:14:02 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:14:02 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 22 05:14:02 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:14:02 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19107 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Jan 22 05:14:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3470835989' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 05:14:02 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 22 05:14:02 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:14:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 22 05:14:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4119794769' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 05:14:02 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29081 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:02 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29093 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:02 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: ops {prefix=ops} (starting...)
Jan 22 05:14:02 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:14:02 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19146 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:02 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29129 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:02 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 22 05:14:02 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4226485372' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 05:14:02 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29119 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:02 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19170 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:03 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: session ls {prefix=session ls} (starting...)
Jan 22 05:14:03 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz Can't run that command on an inactive MDS!
Jan 22 05:14:03 np0005591760 ceph-mds[96037]: mds.cephfs.compute-0.xazhzz asok_command: status {prefix=status} (starting...)
Jan 22 05:14:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 22 05:14:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 05:14:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:14:03.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:03 np0005591760 nova_compute[248045]: 2026-01-22 10:14:03.214 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:14:03 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29149 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 22 05:14:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790598362' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 05:14:03 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:03 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:03 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:14:03.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 22 05:14:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/408776928' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 05:14:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 22 05:14:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 05:14:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:03.611Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:03.624Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:03.624Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:03.624Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:03 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:14:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 22 05:14:03 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270074109' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 05:14:03 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:14:03 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29210 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T10:14:03.891+0000 7ff39ecc1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 05:14:03 np0005591760 ceph-mgr[74522]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 05:14:03 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29222 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:03 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T10:14:03.981+0000 7ff39ecc1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 05:14:03 np0005591760 ceph-mgr[74522]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 05:14:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 22 05:14:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3946716011' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 05:14:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 22 05:14:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2625219139' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 05:14:04 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29224 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:04 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: 2026-01-22T10:14:04.393+0000 7ff39ecc1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 05:14:04 np0005591760 ceph-mgr[74522]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 05:14:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 22 05:14:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4147473443' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 05:14:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 22 05:14:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3904635606' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 05:14:04 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 22 05:14:04 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1610741303' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 05:14:04 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29257 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:05 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29285 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:14:05.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:05 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19314 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:05 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29306 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:05 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29287 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:05 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:05 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:05 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:14:05.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:05 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 22 05:14:05 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863189932' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 05:14:05 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19338 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:05 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:14:05 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29339 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:05 np0005591760 nova_compute[248045]: 2026-01-22 10:14:05.704 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:14:05 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29317 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:05 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19365 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:06 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29369 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:06 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29341 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4530176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4530176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4521984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931417 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4521984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 4505600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 4505600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.593015671s of 14.594431877s, submitted: 1
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 4497408 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4489216 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931549 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4489216 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 4464640 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 4456448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 4456448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 4448256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933061 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 4448256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 4448256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 4440064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 4440064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 4431872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932470 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 4431872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 4431872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 4423680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.810277939s of 14.815903664s, submitted: 3
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de1e0800 session 0x5581de12e960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581dc002400 session 0x5581df2c43c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 4423680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 4415488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932338 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 4415488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 4415488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 4407296 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 4407296 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 4407296 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932338 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 4399104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 4399104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 4390912 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 4390912 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.853135109s of 10.854328156s, submitted: 1
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 4382720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932470 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 4374528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 4374528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 4358144 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 4358144 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 4349952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935494 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4341760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4341760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4341760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 4333568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935494 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 4284416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.043842316s of 12.046784401s, submitted: 3
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 4284416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 4284416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 4276224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 4276224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934903 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 4276224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 4276224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 4276224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 4268032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 4268032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 4268032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 4259840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 4259840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 4251648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 4251648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 4251648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 4243456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 4243456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 4235264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 4227072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 4227072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 4218880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 4218880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 4210688 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 4202496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 4202496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 4194304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 4194304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 4186112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 4186112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 4186112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 4177920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 4177920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 4169728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 4161536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 4161536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 4153344 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 4153344 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 4145152 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 4145152 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 4136960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 4136960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 4136960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 4112384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 4112384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 4112384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 4104192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 4096000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 4079616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 4079616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581dc000400 session 0x5581df173860
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581dc000800 session 0x5581df173680
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4063232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4063232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4063232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 4055040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 4055040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934771 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 4046848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 4046848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 60.925277710s of 60.927970886s, submitted: 2
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 4046848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 8499 writes, 32K keys, 8499 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 8499 writes, 2181 syncs, 3.90 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8499 writes, 32K keys, 8499 commit groups, 1.0 writes per commit group, ingest: 20.89 MB, 0.03 MB/s#012Interval WAL: 8499 writes, 2181 syncs, 3.90 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5581da631350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5581da631350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 3973120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 3973120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934903 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 3964928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 3964928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 3964928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 3956736 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 3956736 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936415 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 3940352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 3932160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 3932160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 3923968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.039274216s of 12.042833328s, submitted: 3
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 3923968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935233 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 3923968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 3915776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 3915776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 3915776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 3907584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 3907584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 3899392 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 3883008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 3883008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 3874816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 3874816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 3874816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 3866624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 3866624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 3858432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 3858432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 3850240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 3850240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 3850240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 3842048 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 3842048 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3833856 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3833856 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3825664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3825664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3825664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de1de400 session 0x5581de6a21e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581ddd27000 session 0x5581df2c5e00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3817472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3817472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3817472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3809280 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3801088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3801088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3801088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3801088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3792896 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935101 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3792896 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3784704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.245330811s of 38.248317719s, submitted: 2
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3784704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3776512 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3768320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935233 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3768320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3760128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3768320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3768320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3760128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938257 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3760128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3751936 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3751936 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3743744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3743744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937666 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3743744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3735552 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3735552 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.062061310s of 16.066551208s, submitted: 4
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3727360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3727360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3727360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3719168 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3719168 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 3645440 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3514368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3514368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3506176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3506176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3506176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3497984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3497984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3497984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 3489792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 3489792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3481600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3481600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3481600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df351000 session 0x5581ddfd2960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de1de000 session 0x5581ddfd0f00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 3473408 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.643405914s of 28.714008331s, submitted: 119
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3497984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937534 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3194880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85901312 unmapped: 3186688 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85901312 unmapped: 3186688 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85901312 unmapped: 3186688 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.088833809s of 12.240459442s, submitted: 237
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939178 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939178 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937996 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.970689774s of 12.974460602s, submitted: 3
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de159800 session 0x5581de142780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581db44f400 session 0x5581df2c4000
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 3178496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937864 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937864 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3170304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.103719711s of 12.104599953s, submitted: 1
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937996 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937996 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937405 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3153920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df317c00 session 0x5581dfa78000
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937405 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.872491837s of 16.874778748s, submitted: 2
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937273 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3145728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937273 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937405 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df326000 session 0x5581dee7eb40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df314000 session 0x5581de12e3c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937405 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.118247986s of 23.120439529s, submitted: 2
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937273 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938917 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 3137536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 3112960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 3112960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 3112960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940429 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 3112960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940429 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.631116867s of 19.636159897s, submitted: 4
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940297 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 3104768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940297 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581ddd88400 session 0x5581df2c5860
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581ddd27000 session 0x5581dee7e000
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 3096576 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581db44cc00 session 0x5581de0ae960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581dcf5e800 session 0x5581de0ae780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 3096576 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940297 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940297 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.061443329s of 17.062528610s, submitted: 1
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 3088384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940561 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942073 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.044746399s of 12.049272537s, submitted: 4
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940891 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de40c000 session 0x5581dfa78960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df314400 session 0x5581dfa78780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940627 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940627 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 3080192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.199202538s of 15.202738762s, submitted: 3
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940759 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de72b800 session 0x5581de6a3e00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581de158000 session 0x5581df2c4d20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940759 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940759 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.810004234s of 14.811164856s, submitted: 1
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940891 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942271 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942271 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942271 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.929155350s of 17.932794571s, submitted: 3
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df325400 session 0x5581dfa79680
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 3039232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942139 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 75.419815063s of 75.422309875s, submitted: 1
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943783 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943192 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943192 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.829685211s of 16.833559036s, submitted: 3
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3014656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581dc9e7400 session 0x5581dcb7ad20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581ddd6b400 session 0x5581ddd063c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3006464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2998272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df327400 session 0x5581de00cf00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 ms_handle_reset con 0x5581df316c00 session 0x5581ded763c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943060 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 105.914726257s of 105.915969849s, submitted: 1
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 2981888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943324 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944836 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.300821304s of 10.306298256s, submitted: 4
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945757 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945625 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 2957312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc671000/0x0/0x4ffc00000, data 0xf714b/0x19b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.359743118s of 10.379639626s, submitted: 25
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 139 ms_handle_reset con 0x5581ddd86800 session 0x5581ded77a40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 2908160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 2883584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959926 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 139 ms_handle_reset con 0x5581ddd86800 session 0x5581de00c1e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86245376 unmapped: 2842624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc664000/0x0/0x4ffc00000, data 0xfd48f/0x1a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 139 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 2834432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xff597/0x1a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 2834432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 2834432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 2834432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961156 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0xff597/0x1a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 2924544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 2924544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964082 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964082 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964082 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964082 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 2908160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 2908160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964082 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 2899968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964082 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 ms_handle_reset con 0x5581ded91800 session 0x5581de6a3e00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 ms_handle_reset con 0x5581df315c00 session 0x5581ded99e00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 ms_handle_reset con 0x5581db44ec00 session 0x5581ded99a40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 2916352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.062061310s of 38.074607849s, submitted: 38
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 ms_handle_reset con 0x5581ddd89c00 session 0x5581ded99860
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 2891776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 ms_handle_reset con 0x5581db44ec00 session 0x5581ded990e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x101569/0x1ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 ms_handle_reset con 0x5581ddd86800 session 0x5581ded98960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 2883584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86212608 unmapped: 2875392 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 143 ms_handle_reset con 0x5581ded91800 session 0x5581ded98000
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 143 ms_handle_reset con 0x5581db44dc00 session 0x5581df50c1e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 143 ms_handle_reset con 0x5581df316400 session 0x5581df50c3c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 143 ms_handle_reset con 0x5581df316400 session 0x5581df50c5a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 143 ms_handle_reset con 0x5581db44dc00 session 0x5581df50c960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 2867200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972498 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 2867200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc658000/0x0/0x4ffc00000, data 0x1057a5/0x1b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 2867200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 2859008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 2859008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 2859008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972498 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc658000/0x0/0x4ffc00000, data 0x1057a5/0x1b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974492 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc656000/0x0/0x4ffc00000, data 0x107777/0x1b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 1777664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc656000/0x0/0x4ffc00000, data 0x107777/0x1b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.917074203s of 17.944137573s, submitted: 28
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90275840 unmapped: 3006464 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022404 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90357760 unmapped: 2924544 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbf60000/0x0/0x4ffc00000, data 0x7fe777/0x8ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90357760 unmapped: 2924544 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002c00 session 0x5581ded772c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd88400 session 0x5581de16a780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028316 data_alloc: 218103808 data_used: 163840
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40c000 session 0x5581dfa79a40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86000 session 0x5581dfa78780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbf60000/0x0/0x4ffc00000, data 0x7fe777/0x8ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028316 data_alloc: 218103808 data_used: 163840
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbf60000/0x0/0x4ffc00000, data 0x7fe777/0x8ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.525476456s of 14.562845230s, submitted: 39
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028448 data_alloc: 218103808 data_used: 163840
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 2859008 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90267648 unmapped: 3014656 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df327000 session 0x5581dfa790e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2400 session 0x5581dee7e960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90267648 unmapped: 3014656 heap: 93282304 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbf60000/0x0/0x4ffc00000, data 0x7fe777/0x8ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de72b400 session 0x5581dee7f680
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df351800 session 0x5581de8cdc20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90701824 unmapped: 15704064 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb70d000/0x0/0x4ffc00000, data 0x10507d9/0x10ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 15671296 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092105 data_alloc: 218103808 data_used: 163840
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 15671296 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 15671296 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 15671296 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 15671296 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 15671296 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092105 data_alloc: 218103808 data_used: 163840
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb70d000/0x0/0x4ffc00000, data 0x10507d9/0x10ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.144878387s of 11.177850723s, submitted: 35
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df230000 session 0x5581df44b0e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb70d000/0x0/0x4ffc00000, data 0x10507d9/0x10ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 90595328 unmapped: 15810560 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 11755520 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb6e8000/0x0/0x4ffc00000, data 0x10747fc/0x1124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153696 data_alloc: 218103808 data_used: 8896512
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 7815168 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153564 data_alloc: 218103808 data_used: 8896512
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb6e8000/0x0/0x4ffc00000, data 0x10747fc/0x1124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.294046402s of 10.303358078s, submitted: 9
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 4390912 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3702784 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa2fa000/0x0/0x4ffc00000, data 0x12ba7fc/0x136a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3702784 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3702784 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa2fa000/0x0/0x4ffc00000, data 0x12ba7fc/0x136a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102735872 unmapped: 3670016 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181628 data_alloc: 218103808 data_used: 9027584
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102735872 unmapped: 3670016 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102735872 unmapped: 3670016 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102735872 unmapped: 3670016 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000800 session 0x5581ddf1e3c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44dc00 session 0x5581df44b4a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de1e0400 session 0x5581dee7ef00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10469376 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa9ae000/0x0/0x4ffc00000, data 0x7fe777/0x8ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10469376 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035432 data_alloc: 218103808 data_used: 163840
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10469376 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10469376 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44d800 session 0x5581df50cb40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.000518799s of 12.062762260s, submitted: 87
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dbf78800 session 0x5581de2192c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985314 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000400 session 0x5581dfa79860
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40d000 session 0x5581de15b4a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985314 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 9768 writes, 35K keys, 9768 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 9768 writes, 2784 syncs, 3.51 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1269 writes, 2760 keys, 1269 commit groups, 1.0 writes per commit group, ingest: 2.21 MB, 0.00 MB/s
Interval WAL: 1269 writes, 603 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5581da631350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5581da631350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memta
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985314 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.208404541s of 16.214429855s, submitted: 8
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985446 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 10461184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44f800 session 0x5581de0ae1e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96124928 unmapped: 13434880 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd6000/0x0/0x4ffc00000, data 0x5d9767/0x686000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 95330304 unmapped: 14229504 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023564 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2800 session 0x5581dee7fa40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2800 session 0x5581ddfd23c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44f800 session 0x5581ddfd03c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dbf78800 session 0x5581ddfd0d20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 94814208 unmapped: 14745600 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd4000/0x0/0x4ffc00000, data 0x5d979a/0x688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 94814208 unmapped: 14745600 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd4000/0x0/0x4ffc00000, data 0x5d979a/0x688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 12910592 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 12910592 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 12910592 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059359 data_alloc: 218103808 data_used: 5009408
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd4000/0x0/0x4ffc00000, data 0x5d979a/0x688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 12910592 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 12910592 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd4000/0x0/0x4ffc00000, data 0x5d979a/0x688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 12910592 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd4000/0x0/0x4ffc00000, data 0x5d979a/0x688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 12894208 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 12894208 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059359 data_alloc: 218103808 data_used: 5009408
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.747314453s of 16.761581421s, submitted: 13
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 12894208 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fabd4000/0x0/0x4ffc00000, data 0x5d979a/0x688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 8904704 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa104000/0x0/0x4ffc00000, data 0x10a979a/0x1158000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 8241152 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 8241152 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 8241152 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153515 data_alloc: 218103808 data_used: 5873664
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 8241152 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa067000/0x0/0x4ffc00000, data 0x114679a/0x11f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 8241152 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa067000/0x0/0x4ffc00000, data 0x114679a/0x11f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 8241152 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9502720 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9502720 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152083 data_alloc: 218103808 data_used: 5873664
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9502720 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9502720 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa043000/0x0/0x4ffc00000, data 0x116a79a/0x1219000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9502720 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 9502720 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa043000/0x0/0x4ffc00000, data 0x116a79a/0x1219000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.747667313s of 13.811450958s, submitted: 99
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152003 data_alloc: 218103808 data_used: 5873664
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa03c000/0x0/0x4ffc00000, data 0x117179a/0x1220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152307 data_alloc: 218103808 data_used: 5881856
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9388032 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa030000/0x0/0x4ffc00000, data 0x117d79a/0x122c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101294080 unmapped: 9322496 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df230c00 session 0x5581dedfaf00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000400 session 0x5581dded9a40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44e000 session 0x5581de16a780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa030000/0x0/0x4ffc00000, data 0x117d79a/0x122c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998156 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998156 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998156 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2400 session 0x5581ded510e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df351000 session 0x5581df44a780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998156 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12951552 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.485546112s of 27.501417160s, submitted: 25
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc001800 session 0x5581ddf55680
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12943360 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df326400 session 0x5581de142f00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa970000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de1de000 session 0x5581ddd065a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12918784 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 12918784 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40cc00 session 0x5581de165e00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12935168 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029680 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd89400 session 0x5581dcb7b2c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12935168 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faca5000/0x0/0x4ffc00000, data 0x50a767/0x5b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df446400 session 0x5581de1421e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df446800 session 0x5581de1430e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 12623872 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 97296384 unmapped: 13320192 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fac80000/0x0/0x4ffc00000, data 0x52e777/0x5dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062130 data_alloc: 218103808 data_used: 4358144
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fac80000/0x0/0x4ffc00000, data 0x52e777/0x5dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062262 data_alloc: 218103808 data_used: 4358144
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 11878400 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fac80000/0x0/0x4ffc00000, data 0x52e777/0x5dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.377078056s of 14.464314461s, submitted: 133
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 98746368 unmapped: 11870208 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103817216 unmapped: 6799360 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 6266880 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 6266880 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151431 data_alloc: 218103808 data_used: 5480448
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 6266880 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa27b000/0x0/0x4ffc00000, data 0xf33777/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 6266880 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa27b000/0x0/0x4ffc00000, data 0xf33777/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 6266880 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 7364608 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 7364608 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147603 data_alloc: 218103808 data_used: 5480448
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa27b000/0x0/0x4ffc00000, data 0xf33777/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 7364608 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 7356416 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa27b000/0x0/0x4ffc00000, data 0xf33777/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 7356416 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 7356416 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 7356416 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147603 data_alloc: 218103808 data_used: 5480448
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.256039619s of 14.299996376s, submitted: 61
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 7348224 heap: 110616576 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86000 session 0x5581dee7e1e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44e400 session 0x5581df2845a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40d800 session 0x5581df173a40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa27b000/0x0/0x4ffc00000, data 0xf33777/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [1])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df316800 session 0x5581de69e780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ded8f400 session 0x5581de6a2000
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44e400 session 0x5581de16ba40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86000 session 0x5581de218780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40d800 session 0x5581dee7f0e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df316800 session 0x5581dbf28f00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103514112 unmapped: 15114240 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 15015936 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 15015936 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df231800 session 0x5581ded99680
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103776256 unmapped: 14852096 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212746 data_alloc: 218103808 data_used: 5480448
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 12017664 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x16a7787/0x1756000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 8118272 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 8118272 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 8118272 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 8118272 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263950 data_alloc: 234881024 data_used: 12967936
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x16a7787/0x1756000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 8085504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de158400 session 0x5581de1645a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86800 session 0x5581ddedd0e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 8085504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 8085504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 8085504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 8085504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263950 data_alloc: 234881024 data_used: 12967936
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x16a7787/0x1756000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.902395248s of 15.079626083s, submitted: 262
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 4235264 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 5062656 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 5062656 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9682000/0x0/0x4ffc00000, data 0x1b2b787/0x1bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df315c00 session 0x5581dded7c20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de0bd400 session 0x5581df44a960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 5029888 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 5029888 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308938 data_alloc: 234881024 data_used: 13201408
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 5021696 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 5021696 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 5021696 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9682000/0x0/0x4ffc00000, data 0x1b2b787/0x1bda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 5021696 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44e400 session 0x5581dbfe8780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86000 session 0x5581de8cdc20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86800 session 0x5581ddedda40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 10387456 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159570 data_alloc: 218103808 data_used: 5480448
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 10387456 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 10387456 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 10387456 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.566994667s of 12.633753777s, submitted: 97
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de1e0800 session 0x5581dfa79c20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de72b000 session 0x5581de16ad20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dbf79c00 session 0x5581de16be00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019565 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021998 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 15171584 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021998 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.935996056s of 11.948678970s, submitted: 16
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103464960 unmapped: 15163392 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103464960 unmapped: 15163392 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103464960 unmapped: 15163392 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103464960 unmapped: 15163392 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103473152 unmapped: 15155200 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021275 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103473152 unmapped: 15155200 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 15147008 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df350400 session 0x5581dc0970e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102350848 unmapped: 16277504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102350848 unmapped: 16277504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc003800 session 0x5581dcb7b2c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df446000 session 0x5581df1725a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102350848 unmapped: 16277504 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046543 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.933603287s of 10.947580338s, submitted: 14
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc001800 session 0x5581de16bc20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102359040 unmapped: 16269312 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 102359040 unmapped: 16269312 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 16891904 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faeaf000/0x0/0x4ffc00000, data 0x2ff78a/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 16891904 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 16891904 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061068 data_alloc: 218103808 data_used: 2031616
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faeaf000/0x0/0x4ffc00000, data 0x2ff78a/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 16891904 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 16891904 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 16891904 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faeaf000/0x0/0x4ffc00000, data 0x2ff78a/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 16908288 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 16908288 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061068 data_alloc: 218103808 data_used: 2031616
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 16908288 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.591248512s of 10.594951630s, submitted: 4
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faeaf000/0x0/0x4ffc00000, data 0x2ff78a/0x3ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105873408 unmapped: 12754944 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa141000/0x0/0x4ffc00000, data 0x106d78a/0x111b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 12476416 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 12476416 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 12476416 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169896 data_alloc: 218103808 data_used: 2195456
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 12468224 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 12468224 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 12468224 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa116000/0x0/0x4ffc00000, data 0x109878a/0x1146000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa114000/0x0/0x4ffc00000, data 0x109a78a/0x1148000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165672 data_alloc: 218103808 data_used: 2195456
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd86000 session 0x5581dded7e00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.979077339s of 12.066130638s, submitted: 130
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa113000/0x0/0x4ffc00000, data 0x109b78a/0x1149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165896 data_alloc: 218103808 data_used: 2195456
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 13148160 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa113000/0x0/0x4ffc00000, data 0x109b78a/0x1149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa113000/0x0/0x4ffc00000, data 0x109b78a/0x1149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165420 data_alloc: 218103808 data_used: 2199552
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa112000/0x0/0x4ffc00000, data 0x109c78a/0x114a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105488384 unmapped: 13139968 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.518879890s of 10.523790359s, submitted: 4
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 13131776 heap: 118628352 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165560 data_alloc: 218103808 data_used: 2199552
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd89800 session 0x5581ded503c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd26400 session 0x5581de6a34a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ded8e800 session 0x5581dbf283c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44f000 session 0x5581ddd4bc20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df350800 session 0x5581de00d4a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x150978a/0x15b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106348544 unmapped: 20676608 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106348544 unmapped: 20676608 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x150978a/0x15b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106348544 unmapped: 20676608 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9ca5000/0x0/0x4ffc00000, data 0x150978a/0x15b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106348544 unmapped: 20676608 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de1e1000 session 0x5581de16a3c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd89400 session 0x5581de8cc000
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205533 data_alloc: 218103808 data_used: 2199552
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000c00 session 0x5581deda6f00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df351c00 session 0x5581de0af860
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 105709568 unmapped: 21315584 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9c80000/0x0/0x4ffc00000, data 0x152d79a/0x15dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9c7f000/0x0/0x4ffc00000, data 0x152d79a/0x15dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239971 data_alloc: 218103808 data_used: 6553600
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.457146645s of 14.484023094s, submitted: 27
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9c7f000/0x0/0x4ffc00000, data 0x152d79a/0x15dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9c7f000/0x0/0x4ffc00000, data 0x152d79a/0x15dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 20668416 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239839 data_alloc: 218103808 data_used: 6553600
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9c6e000/0x0/0x4ffc00000, data 0x153f79a/0x15ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111968256 unmapped: 15056896 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 15228928 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 15228928 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9a4f000/0x0/0x4ffc00000, data 0x175879a/0x1807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 15228928 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 15228928 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265765 data_alloc: 218103808 data_used: 6672384
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 15228928 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 15106048 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 15106048 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9a35000/0x0/0x4ffc00000, data 0x177879a/0x1827000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 15106048 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 15106048 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264093 data_alloc: 218103808 data_used: 6672384
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 15106048 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 15106048 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.666911125s of 13.710399628s, submitted: 76
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 112001024 unmapped: 15024128 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9a2b000/0x0/0x4ffc00000, data 0x178279a/0x1831000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9a2b000/0x0/0x4ffc00000, data 0x178279a/0x1831000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 112001024 unmapped: 15024128 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 112001024 unmapped: 15024128 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264037 data_alloc: 218103808 data_used: 6672384
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df350c00 session 0x5581df2c54a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000400 session 0x5581de188960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000c00 session 0x5581de164000
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109649920 unmapped: 17375232 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109649920 unmapped: 17375232 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109649920 unmapped: 17375232 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa10f000/0x0/0x4ffc00000, data 0x109f78a/0x114d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109649920 unmapped: 17375232 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109649920 unmapped: 17375232 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175510 data_alloc: 218103808 data_used: 2203648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109649920 unmapped: 17375232 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc9e7c00 session 0x5581de16a780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40d000 session 0x5581de8cc000
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041958 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 18817024 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18808832 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18808832 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041958 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18808832 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18808832 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18808832 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18808832 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 18800640 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041958 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 18800640 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 18800640 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 18800640 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 18800640 heap: 127025152 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de72b000 session 0x5581dcb7b2c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de72b000 session 0x5581de69f0e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd6b400 session 0x5581de69eb40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000400 session 0x5581de69e1e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.422857285s of 27.459486008s, submitted: 49
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000c00 session 0x5581de16ab40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 22396928 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079250 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab54000/0x0/0x4ffc00000, data 0x65b767/0x708000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 22388736 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 22388736 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df316800 session 0x5581de16be00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df316800 session 0x5581de16bc20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000400 session 0x5581de16b0e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc000c00 session 0x5581de16b860
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 22380544 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 22380544 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 21143552 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120212 data_alloc: 218103808 data_used: 5742592
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab53000/0x0/0x4ffc00000, data 0x65b78a/0x709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 21143552 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109576192 unmapped: 21127168 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109576192 unmapped: 21127168 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de72b000 session 0x5581ddedd0e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd6b400 session 0x5581de16ad20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd6b400 session 0x5581df1725a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045868 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045868 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 24322048 heap: 130703360 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.282363892s of 20.319124222s, submitted: 42
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df446400 session 0x5581de219a40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 27197440 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118840 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 27197440 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa75e000/0x0/0x4ffc00000, data 0xa51767/0xafe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118840 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df351400 session 0x5581de0aed20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 22863872 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa75e000/0x0/0x4ffc00000, data 0xa51767/0xafe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 22863872 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dffe0c00 session 0x5581de69e5a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df325c00 session 0x5581de728960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd6b400 session 0x5581df44ba40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052673 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052673 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052673 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de72a800 session 0x5581de1652c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002000 session 0x5581dcb7ad20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052673 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.647817612s of 26.688156128s, submitted: 45
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40d000 session 0x5581df44a780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faca8000/0x0/0x4ffc00000, data 0x507767/0x5b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079825 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 27049984 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faca8000/0x0/0x4ffc00000, data 0x507767/0x5b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108229 data_alloc: 218103808 data_used: 4349952
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faca8000/0x0/0x4ffc00000, data 0x507767/0x5b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faca8000/0x0/0x4ffc00000, data 0x507767/0x5b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 26886144 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108229 data_alloc: 218103808 data_used: 4349952
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.121374130s of 14.125967026s, submitted: 2
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109314048 unmapped: 25067520 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109420544 unmapped: 24961024 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 24952832 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 24952832 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 24952832 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140959 data_alloc: 218103808 data_used: 4636672
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa95c000/0x0/0x4ffc00000, data 0x853767/0x900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa95c000/0x0/0x4ffc00000, data 0x853767/0x900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa95c000/0x0/0x4ffc00000, data 0x853767/0x900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140827 data_alloc: 218103808 data_used: 4636672
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa95c000/0x0/0x4ffc00000, data 0x853767/0x900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 24944640 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa95c000/0x0/0x4ffc00000, data 0x853767/0x900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de1e0400 session 0x5581ddedc5a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 24936448 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa95c000/0x0/0x4ffc00000, data 0x853767/0x900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.648491859s of 13.678412437s, submitted: 21
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002000 session 0x5581de16a3c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054817 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054817 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054817 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 28377088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002c00 session 0x5581dcb7ab40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df446c00 session 0x5581dded83c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2400 session 0x5581dee7f2c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc001000 session 0x5581df44ad20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.103810310s of 15.110246658s, submitted: 7
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002000 session 0x5581de6a2780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002c00 session 0x5581de6a0f00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2400 session 0x5581ddf54780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df446c00 session 0x5581de16a960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df350c00 session 0x5581de6a1e00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 26468352 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131456 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 26468352 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 26468352 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa6f6000/0x0/0x4ffc00000, data 0xab77d9/0xb66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc003c00 session 0x5581dee7fc20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de40dc00 session 0x5581de8cd0e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 26468352 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002400 session 0x5581de15ba40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc001400 session 0x5581df1721e0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 26443776 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 23773184 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191323 data_alloc: 218103808 data_used: 8990720
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 23773184 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110608384 unmapped: 23773184 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa6f5000/0x0/0x4ffc00000, data 0xab77fc/0xb67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 23740416 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 23740416 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 23740416 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191323 data_alloc: 218103808 data_used: 8990720
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 23740416 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 23740416 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2800 session 0x5581de729c20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc001400 session 0x5581de15ad20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc002400 session 0x5581de189e00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc003c00 session 0x5581df50c780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.102285385s of 13.141888618s, submitted: 41
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc8f2800 session 0x5581de218f00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa6f5000/0x0/0x4ffc00000, data 0xab77fc/0xb67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 22568960 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 19218432 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 17113088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300899 data_alloc: 234881024 data_used: 9269248
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 17113088 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 16547840 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9603000/0x0/0x4ffc00000, data 0x17987fc/0x1848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 10903552 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 10903552 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 10903552 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347715 data_alloc: 234881024 data_used: 16130048
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9603000/0x0/0x4ffc00000, data 0x17987fc/0x1848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123731968 unmapped: 10649600 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95e5000/0x0/0x4ffc00000, data 0x17b77fc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123731968 unmapped: 10649600 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95e5000/0x0/0x4ffc00000, data 0x17b77fc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 10805248 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 10805248 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 10805248 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343099 data_alloc: 234881024 data_used: 16134144
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f95e5000/0x0/0x4ffc00000, data 0x17b77fc/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 10805248 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.721275330s of 13.793152809s, submitted: 87
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 7127040 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 7061504 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 7061504 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 7061504 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416605 data_alloc: 234881024 data_used: 16449536
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127328256 unmapped: 7053312 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8c45000/0x0/0x4ffc00000, data 0x21577fc/0x2207000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127344640 unmapped: 7036928 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8c45000/0x0/0x4ffc00000, data 0x21577fc/0x2207000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 127377408 unmapped: 7004160 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413933 data_alloc: 234881024 data_used: 16449536
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8c1e000/0x0/0x4ffc00000, data 0x217e7fc/0x222e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8c1e000/0x0/0x4ffc00000, data 0x217e7fc/0x222e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413933 data_alloc: 234881024 data_used: 16449536
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f8c1e000/0x0/0x4ffc00000, data 0x217e7fc/0x222e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 7864320 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df4d0400 session 0x5581ddf55c20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.627637863s of 16.701566696s, submitted: 114
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc001400 session 0x5581de2185a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 13770752 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 13770752 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 13770752 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247843 data_alloc: 234881024 data_used: 9273344
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 13770752 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9cc3000/0x0/0x4ffc00000, data 0x10d97fc/0x1189000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581db44cc00 session 0x5581de12fa40
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 13762560 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df317000 session 0x5581de15be00
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074156 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faad2000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faad2000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074156 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faad2000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4faad2000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074156 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 20504576 heap: 134381568 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.855703354s of 18.884504318s, submitted: 43
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dc9e7000 session 0x5581dcb7a780
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df4d0000 session 0x5581ddd4a3c0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa483000/0x0/0x4ffc00000, data 0x91b7c9/0x9c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142047 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa483000/0x0/0x4ffc00000, data 0x91b7c9/0x9c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24952832 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa483000/0x0/0x4ffc00000, data 0x91b7c9/0x9c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190383 data_alloc: 218103808 data_used: 7307264
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa483000/0x0/0x4ffc00000, data 0x91b7c9/0x9c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa483000/0x0/0x4ffc00000, data 0x91b7c9/0x9c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190383 data_alloc: 218103808 data_used: 7307264
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 25542656 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.366592407s of 16.391584396s, submitted: 36
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 114532352 unmapped: 24051712 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fa483000/0x0/0x4ffc00000, data 0x91b7c9/0x9c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272121 data_alloc: 218103808 data_used: 7716864
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab54000/0x0/0x4ffc00000, data 0x12697c9/0x1317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268097 data_alloc: 218103808 data_used: 7720960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab53000/0x0/0x4ffc00000, data 0x126b7c9/0x1319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread fragmentation_score=0.000400 took=0.000035s
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab53000/0x0/0x4ffc00000, data 0x126b7c9/0x1319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab53000/0x0/0x4ffc00000, data 0x126b7c9/0x1319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268097 data_alloc: 218103808 data_used: 7720960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.798171997s of 12.866126060s, submitted: 121
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 18202624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fab52000/0x0/0x4ffc00000, data 0x126c7c9/0x131a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581de1e1400 session 0x5581de6a0960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ded8e400 session 0x5581dcb7b680
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 22323200 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 22315008 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 22306816 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 22298624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 22298624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 22298624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 22298624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 22298624 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 22282240 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 22282240 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 22274048 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 22290432 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'config diff' '{prefix=config diff}'
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'config show' '{prefix=config show}'
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'counter dump' '{prefix=counter dump}'
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'counter schema' '{prefix=counter schema}'
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 22511616 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 22413312 heap: 138584064 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'log dump' '{prefix=log dump}'
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 22691840 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'perf dump' '{prefix=perf dump}'
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'perf schema' '{prefix=perf schema}'
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 33775616 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 33767424 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 33767424 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 33767424 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 33767424 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 33767424 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 33767424 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 33767424 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 33767424 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 33759232 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 33759232 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 33759232 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 33759232 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 33759232 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 33759232 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 33759232 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115875840 unmapped: 33751040 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115875840 unmapped: 33751040 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115875840 unmapped: 33751040 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115875840 unmapped: 33751040 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 33742848 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 33734656 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 33726464 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 33718272 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 33710080 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 33710080 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 33710080 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 33710080 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 33710080 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 33710080 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 33710080 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 33710080 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 33710080 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 33710080 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 33701888 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 33693696 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 33693696 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 33693696 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 33693696 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 33693696 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 33693696 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 1800.1 total, 600.0 interval
                                              Cumulative writes: 12K writes, 45K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                              Cumulative WAL: 12K writes, 4083 syncs, 3.10 writes per sync, written: 0.03 GB, 0.02 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 2892 writes, 9813 keys, 2892 commit groups, 1.0 writes per commit group, ingest: 11.69 MB, 0.02 MB/s
                                              Interval WAL: 2892 writes, 1299 syncs, 2.23 writes per sync, written: 0.01 GB, 0.02 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 33685504 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 33685504 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 33685504 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 33685504 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 33685504 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 33685504 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 33685504 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 33685504 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 33685504 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 33677312 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 33677312 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 33677312 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 33677312 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 33677312 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 33677312 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 33677312 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 33677312 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 33677312 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 33669120 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 33669120 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 33669120 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 33669120 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 33669120 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 33669120 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 33669120 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 33669120 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 33660928 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 33660928 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 33660928 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 33660928 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 33660928 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 33660928 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 33660928 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 33660928 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 33652736 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 33652736 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 33652736 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 33652736 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 33652736 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 33652736 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 33652736 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 33644544 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 33644544 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 33644544 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 33644544 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 33644544 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 33644544 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 33644544 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 33644544 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 33636352 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 33636352 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 33636352 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 33636352 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 33636352 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 33636352 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 33636352 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 33636352 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 33636352 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 33636352 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 33628160 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 33628160 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 33628160 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 33628160 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 33628160 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581dbf40800 session 0x5581dbf285a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 33628160 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 33628160 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb8b7000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 33628160 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 33628160 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 33628160 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 33628160 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084925 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 255.396591187s of 255.411026001s, submitted: 24
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 33628160 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [1])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 33570816 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd26c00 session 0x5581de0ae960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.414360046s of 23.477830887s, submitted: 118
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 33562624 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df314400 session 0x5581ded76d20
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 33505280 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084705 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 33464320 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 33398784 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581ddd6ac00 session 0x5581db3ce960
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 33398784 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 33398784 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 33398784 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 33398784 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 33398784 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 33398784 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 33382400 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 33382400 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 33382400 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 33382400 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 33374208 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 33366016 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 33357824 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 33357824 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 33357824 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: mgrc ms_handle_reset ms_handle_reset con 0x5581ddd26800
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1082790531
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1082790531,v1:192.168.122.100:6801/1082790531]
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: mgrc handle_mgr_configure stats_period=5
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19398 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 33275904 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 33267712 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 33259520 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 33251328 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 33243136 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 33243136 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 33243136 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 33243136 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 33243136 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 33243136 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 33243136 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 33243136 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 33243136 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 33234944 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/173042624' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 33226752 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 33218560 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 33210368 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 33210368 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 33210368 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 33210368 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 33210368 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 33210368 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 33210368 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 33210368 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 33210368 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 33210368 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 33210368 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 33202176 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 33193984 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 33185792 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 33185792 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 33185792 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 33185792 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 33185792 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 33185792 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 33185792 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 33185792 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 33185792 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 33185792 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 33177600 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 33169408 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 33161216 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 33161216 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 33161216 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 33161216 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 33161216 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 33161216 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 33161216 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 33161216 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 33153024 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 33153024 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 33153024 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 33153024 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 33153024 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 33153024 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 33153024 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 33153024 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 33144832 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 33136640 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 33136640 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 33136640 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 33136640 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 33136640 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 33136640 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 33136640 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 33136640 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 33136640 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 33136640 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 33136640 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 33128448 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 33128448 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 33128448 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 33128448 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 33128448 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 33128448 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 33128448 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 33128448 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 33120256 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 33120256 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 33120256 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 33120256 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 33120256 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 33120256 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 33112064 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 33112064 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 33112064 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 33112064 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 33112064 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 33112064 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 33112064 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 33112064 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 33112064 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 33112064 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 33103872 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 33095680 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 ms_handle_reset con 0x5581df316400 session 0x5581dfa785a0
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 33095680 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 33095680 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 33095680 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 33087488 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 33087488 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 33087488 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 33087488 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 33087488 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 33087488 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 33087488 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 33087488 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 33087488 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 33087488 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 33079296 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 33079296 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 33079296 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 33079296 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 33079296 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 33079296 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 33079296 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 33079296 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 33079296 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 33079296 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 33071104 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 33054720 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 33046528 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 33046528 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 33046528 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 33046528 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 33046528 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 33046528 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 33046528 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 33046528 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 33046528 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 33046528 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 33038336 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 33038336 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 33038336 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 33038336 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 33038336 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 33038336 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 33030144 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 33030144 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 33030144 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 33030144 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 33030144 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 33030144 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 33030144 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 33030144 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 33030144 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 33030144 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 33021952 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 33013760 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 33013760 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 33013760 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 33013760 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 33013760 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 33005568 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 33005568 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 33005568 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 13K writes, 46K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 13K writes, 4437 syncs, 3.01 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 708 writes, 1064 keys, 708 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s
Interval WAL: 708 writes, 354 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 32997376 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 32989184 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 32989184 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 32989184 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 32989184 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 32989184 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 32989184 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 32989184 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 32989184 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 32972800 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 32964608 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 32964608 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 32964608 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 32964608 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 32964608 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 32964608 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 32964608 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 32964608 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'config diff' '{prefix=config diff}'
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 32948224 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'config show' '{prefix=config show}'
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'counter dump' '{prefix=counter dump}'
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'counter schema' '{prefix=counter schema}'
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116695040 unmapped: 32931840 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbcb8000/0x0/0x4ffc00000, data 0x107767/0x1b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 33046528 heap: 149626880 old mem: 2845415832 new mem: 2845415832
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084633 data_alloc: 218103808 data_used: 155648
Jan 22 05:14:06 np0005591760 ceph-osd[82185]: do_command 'log dump' '{prefix=log dump}'
Jan 22 05:14:06 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29368 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:06 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29390 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:06 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19413 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:06 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 22 05:14:06 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954432016' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 05:14:06 np0005591760 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 05:14:06 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29386 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:06 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29417 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:14:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:14:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:14:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:14:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:14:06 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:14:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:14:07 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:14:07 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19440 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:07.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:07.141Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:07.141Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:07.142Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:14:07.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:07 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29450 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:07 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29416 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:07 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19464 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:07 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:07 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:07 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:14:07.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:07 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29440 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:07 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29446 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:07 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-mgr-compute-0-rfmoog[74518]: ::ffff:192.168.122.100 - - [22/Jan/2026:10:14:07] "GET /metrics HTTP/1.1" 200 48595 "" "Prometheus/2.51.0"
Jan 22 05:14:07 np0005591760 ceph-mgr[74522]: [prometheus INFO cherrypy.access.140683796032864] ::ffff:192.168.122.100 - - [22/Jan/2026:10:14:07] "GET /metrics HTTP/1.1" 200 48595 "" "Prometheus/2.51.0"
Jan 22 05:14:07 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:14:07 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19491 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:07 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29470 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:07 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29495 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:07 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 22 05:14:07 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1377591731' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 05:14:08 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19515 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:08 np0005591760 podman[284868]: 2026-01-22 10:14:08.070469186 +0000 UTC m=+0.064343827 container health_status ab95c899e45b27b836807a64b5b44929b66bdbdd89e60cfe945f56ecc8d78a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 05:14:08 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29497 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:08 np0005591760 nova_compute[248045]: 2026-01-22 10:14:08.215 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:14:08 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29522 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2428409639' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/625254246' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 05:14:08 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29503 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3633694080' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 05:14:08 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19554 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:08 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29549 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971205236' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707293425' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 05:14:08 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19587 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:08 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:08.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2153074600' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 22 05:14:08 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3194473199' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 05:14:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:09.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:09.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:09 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:09.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 22 05:14:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2402410862' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 05:14:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.001000011s ======
Jan 22 05:14:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:14:09.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Jan 22 05:14:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 22 05:14:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1970671608' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 05:14:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 22 05:14:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3737866530' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 05:14:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 22 05:14:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1020065041' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 05:14:09 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:09 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:09 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:14:09.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:09 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:14:09 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 22 05:14:09 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/271927536' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 05:14:10 np0005591760 podman[285161]: 2026-01-22 10:14:10.106455992 +0000 UTC m=+0.101395476 container health_status b2a735ce4a5a698b378c642ed18ce8dd172e303f898478909117158eb505816a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3fe82618a1e232724f6de40ae7476ca4639ac3a88c6a67055315a726c890e06f-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54-b8d92c5fefbc8e81a5a2514e924a171d5c75100e3dcee9501a4dc3acd576af54'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 05:14:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 22 05:14:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502224251' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 05:14:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 22 05:14:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647610031' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 05:14:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 22 05:14:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388930613' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 05:14:10 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 22 05:14:10 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621027738' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 05:14:10 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29702 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:10 np0005591760 nova_compute[248045]: 2026-01-22 10:14:10.705 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:14:11 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19758 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:11 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29720 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 22 05:14:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471761252' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 05:14:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 22 05:14:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/906921113' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 05:14:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:14:11.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:11 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29728 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:11 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19788 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:11 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19776 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:11 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 22 05:14:11 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/245176608' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 05:14:11 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29762 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:11 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:11 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:11 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:14:11.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:11 np0005591760 systemd[1]: Starting Hostname Service...
Jan 22 05:14:11 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29768 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:11 np0005591760 systemd[1]: Started Hostname Service.
Jan 22 05:14:11 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:14:11 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19812 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:11 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29758 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:11 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29792 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:14:11 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 22 05:14:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:14:12 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 22 05:14:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:14:12 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 22 05:14:12 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-nfs-cephfs-2-0-compute-0-ylzmiu[259450]: 22/01/2026 10:14:12 : epoch 6971f4b6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 22 05:14:12 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29782 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 22 05:14:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2439672248' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 05:14:12 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29807 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:12 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19851 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:12 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19875 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:12 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29837 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:12 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29831 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Jan 22 05:14:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3091181594' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 05:14:12 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19893 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:12 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29867 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:12 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29848 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:12 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 22 05:14:12 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242821358' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19914 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29872 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29885 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 05:14:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:14:13.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:13 np0005591760 nova_compute[248045]: 2026-01-22 10:14:13.217 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1512717093' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 05:14:13 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19935 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 05:14:13 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29896 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.19962 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 05:14:13 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:13 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:13 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:14:13.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 05:14:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:13.613Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 05:14:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:13.645Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591760.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591760.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:13.645Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591761.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591761.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:13 np0005591760 ceph-43df7a30-cf5f-5209-adfd-bf44298b19f2-alertmanager-compute-0[105559]: ts=2026-01-22T10:14:13.646Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005591762.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005591762.shiftstack on 192.168.122.80:53: no such host"
Jan 22 05:14:13 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 22 05:14:13 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 05:14:13 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.29932 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 05:14:14 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.30017 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:14 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 22 05:14:14 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3994207522' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 05:14:14 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.20088 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:14 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.30049 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 05:14:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 22 05:14:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194669517' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 05:14:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.102 - anonymous [22/Jan/2026:10:14:15.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:15 np0005591760 radosgw[95028]: ====== starting new request req=0x7fe7888fc5d0 =====
Jan 22 05:14:15 np0005591760 radosgw[95028]: ====== req done req=0x7fe7888fc5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 05:14:15 np0005591760 radosgw[95028]: beast: 0x7fe7888fc5d0: 192.168.122.100 - anonymous [22/Jan/2026:10:14:15.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 05:14:15 np0005591760 ceph-mgr[74522]: log_channel(cluster) log [DBG] : pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 22 05:14:15 np0005591760 nova_compute[248045]: 2026-01-22 10:14:15.706 248049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 05:14:15 np0005591760 ceph-mon[74254]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 22 05:14:15 np0005591760 ceph-mon[74254]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671481927' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 22 05:14:16 np0005591760 ceph-mgr[74522]: log_channel(audit) log [DBG] : from='client.30074 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
